Server configuration

Installation

Make sure to install the I/O shield in the chassis before mounting the motherboard.

Make a bootable CentOS disk

diskutil list
/dev/disk3 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *1.0 TB     disk3
   1:                       0xEF                         8.9 MB     disk3s2

diskutil unmountDisk /dev/disk3
sudo dd if=/Users/xiyuanbao/Downloads/CentOS-7-x86_64-DVD-1908.iso of=/dev/rdisk3 bs=1m
diskutil list
diskutil eject /dev/disk3
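While dd runs it prints nothing; on macOS you can ask it for progress from another terminal (killall -INFO sends SIGINFO, which dd answers with a status line):

sudo killall -INFO dd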

If we use Windows to create the boot device instead of Linux dd, the device label will be missing several characters.

The Supermicro motherboard uses the onboard graphics card by default (1080p VGA output).

We need to disable the onboard graphics card in the BIOS (press the Delete key to enter) to enable the external graphics card.

However, you will see garbled output when installing CentOS with the Nvidia card active. Install the system and the Nvidia driver using the onboard graphics card first.

CentOS system configuration

Input source

After adding the Pinyin input source, switch input sources with the shortcut Super+Space.

Terminal shortcut

Settings -> Keyboard -> Add a custom shortcut



root using /etc/profile

Add to /root/.bashrc:
source /etc/profile

CLI history size

In /etc/profile

# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=100000
HISTFILESIZE=200000

Nvidia driver

CentOS recommended way: rpm (alternative guide: https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-centos-7-linux)

Download the driver: https://www.nvidia.com/Download/driverResults.aspx/156086/en-us
Make sure the kernel version is consistent:
sudo yum install "kernel-devel-uname-r == $(uname -r)"
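To double-check the match, compare the running kernel with the installed headers:

uname -r                 # running kernel
rpm -q kernel-devel      # installed kernel headers; versions should match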

[xbao@localhost Downloads]$ lshw -numeric -C display
WARNING: you should run this program as super-user.
  *-display                 
       description: VGA compatible controller
       product: ASPEED Graphics Family [1A03:2000]
       vendor: ASPEED Technology, Inc. [1A03]
       physical id: 0
       bus info: pci@0000:04:00.0
       version: 41
       width: 32 bits
       clock: 33MHz
       capabilities: vga_controller cap_list rom
       configuration: driver=ast latency=0
       resources: irq:17 memory:9c000000-9cffffff memory:9d000000-9d01ffff ioport:2000(size=128)
  *-display
       description: VGA compatible controller
       product: TU102 [GeForce RTX 2080 Ti] [10DE:1E04]
       vendor: NVIDIA Corporation [10DE]
       physical id: 0
       bus info: pci@0000:af:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller bus_master cap_list rom
       configuration: driver=nouveau latency=0   # becomes driver=nvidia after the Nvidia driver is installed
       resources: iomemory:39bf0-39bef iomemory:39bf0-39bef irq:331 memory:ed000000-edffffff memory:39bfe0000000-39bfefffffff memory:39bff0000000-39bff1ffffff ioport:e000(size=128) memory:ee000000-ee07ffff

It turns out that in EFI mode, the off-board card gets stuck before login:
https://access.redhat.com/discussions/3550251
Setting legacy mode for the PCI-E slot solves the problem.

Sensors

https://github.com/netdata/netdata
Shared Software compiled from source: /usr/syssoft
https://blog.csdn.net/jajavaja/article/details/48212009

freeipmi:
https://github.com/netdata/netdata/tree/master/collectors/freeipmi.plugin
As root:

sudo yum install freeipmi freeipmi-devel netdata-freeipmi.x86_64
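After installation, a minimal way to bring the dashboard up (assuming netdata was installed as a systemd service; it serves its dashboard on port 19999 by default):

sudo systemctl enable netdata
sudo systemctl start netdata
# if accessing the dashboard from another machine:
sudo firewall-cmd --permanent --add-port=19999/tcp
sudo firewall-cmd --reload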

Geekbench 5 result: https://browser.geekbench.com/v5/cpu/1118615

/etc/profile not taking effect

Add this to ~/.bashrc for every user:

if [ -f /etc/profile ]; then
        . /etc/profile
fi

SSH

Check the public IP:

curl ifconfig.me

https://blog.csdn.net/YlanHds/article/details/80164006

rpm -qa|grep ssh
openssh-clients-7.4p1-21.el7.x86_64
openssh-server-7.4p1-21.el7.x86_64
libssh2-1.8.0-3.el7.x86_64
openssh-7.4p1-21.el7.x86_64
sudo vi /etc/ssh/sshd_config
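For reference, a few commonly adjusted options in /etc/ssh/sshd_config (these example values are my assumptions, not settings from the original notes):

# /etc/ssh/sshd_config (example values, adjust to your policy)
PermitRootLogin no
PasswordAuthentication yes
X11Forwarding yes

# apply the change
sudo systemctl restart sshd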

Allow an IP range:
https://kb.ucla.edu/articles/list-of-uc-related-ip-addresses
https://blog.51cto.com/kangyang/580871
https://www.cnblogs.com/dadonggg/p/8023511.html
https://bbs.csdn.net/topics/392258166

#In /etc/hosts.allow
sshd:192.168.10.88:allow
#In /etc/hosts.deny
sshd:ALL

Or:

firewall-cmd --permanent --zone=public --add-source=192.168.100.0/24
firewall-cmd --permanent --zone=public --list-sources
firewall-cmd --reload

firewall-cmd --permanent --zone=public --list-sources   # verify after reload

SSH X11 forwarding

https://www.cnblogs.com/aiweixiao/p/6576186.html
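The link covers the details; a minimal usage sketch (assumes xorg-x11-xauth on the server and an X client like xclock from xorg-x11-apps for testing):

sudo yum install -y xorg-x11-xauth   # on the server, once
ssh -X user@server                   # untrusted X11 forwarding
ssh -Y user@server                   # trusted X11 forwarding
xclock                               # test: the window should appear locally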

vnc

https://www.jianshu.com/p/59722f72293f
https://github.com/TigerVNC/tigervnc/issues/592

yum install tigervnc-server -y
cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
#or 2.service, 3.service... for more users

Change <user> to your username in the configuration.
For root, use PIDFile=/root/.vnc/%H%i.pid
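For reference, the relevant lines after substitution look roughly like this (a sketch based on the stock CentOS 7 tigervnc unit; the exact template may differ by version, and the xbao user is taken from these notes):

# /etc/systemd/system/vncserver@:1.service (excerpt, <user> replaced by xbao)
[Service]
Type=forking
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l xbao -c "/usr/bin/vncserver %i"
PIDFile=/home/xbao/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'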
Firewall settings

firewall-cmd --permanent --add-service vnc-server
firewall-cmd --zone=public --add-port=5901-5905/tcp --permanent
firewall-cmd --reload

We need to uninstall dbus from Anaconda (its dbus shadows the system one):

conda uninstall -y dbus

start and stop server:

#first user
vncserver :1
sudo systemctl daemon-reload
sudo systemctl enable vncserver@:1.service
#or just
vncserver
#stop
vncserver -kill :1

Status

systemctl status vncserver@:1   
vncserver -list                
ps aux |grep vnc            

Use RealVNC on Mac.

VNC has to be started by the same account (not via su username):
https://forums.centos.org/viewtopic.php?t=4743
Also, to log out of the physical screen:
DISPLAY=:0.0 gnome-session-quit --force

Sometimes the VNC desktop is a black screen; the log file shows a dconf permission error. Fix the permissions:
sudo chmod -R 777 /run/user/1000

update

https://groups.google.com/g/tigervnc-users/c/ZqHU64Ilu_0
https://groups.google.com/a/continuum.io/g/anaconda/c/7Hz0nZA573E
https://forums.centos.org/viewtopic.php?t=66886
https://www.jianshu.com/p/05976dfd9dd2

Recently, after a power outage, VNC stopped working.

** (process:27039): WARNING **: 11:16:51.255: Could not make bus activated clients aware of XDG_CURRENT_DESKTOP=GNOME environment variable: Could not connect: Connection refused
/root/.vnc/xstartup: line 5: 27039 Terminated              /etc/X11/xinit/xinitrc

On inspection, dbus and other related scripts were resolving to the Anaconda counterparts. To keep both, edit the xstartup script under ~/.vnc for each user:

PATH_old=$PATH
export PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/pbs/bin

unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
/etc/X11/xinit/xinitrc
# Assume either Gnome or KDE will be started by default when installed
# We want to kill the session automatically in this case when user logs out. In case you modify
# /etc/X11/xinit/Xclients or ~/.Xclients yourself to achieve a different result, then you should
# be responsible to modify below code to avoid that your session will be automatically killed
if [ -e /usr/bin/gnome-session -o -e /usr/bin/startkde ]; then
    vncserver -kill $DISPLAY
fi

export PATH=$PATH_old

nomachine

Installation is automatic. Open the firewall for NoMachine:
https://kifarunix.com/install-and-setup-nomachine-on-centos-8/

firewall-cmd --add-port=4000/tcp --add-port=4011-4999/udp --permanent
firewall-cmd --reload

firewall-cmd

A firewall-cmd cheatsheet is available; a GUI (firewall-config) also exists:

https://www.vultr.com/docs/installing-netdata-on-centos-7
https://www.cnblogs.com/hubing/p/6058932.html

Anaconda

Download: https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
Install to /usr/local/anaconda3.
Edit the environment variable:

vi /etc/profile
#add to bottom
export PATH=/usr/local/anaconda3/bin:$PATH

Template to add package to specific environment:

sudo env "PATH=$PATH" pip install torch-0.4.0-cp36-cp36m-manylinux1_x86_64.whl

Use jupyterhub to run a Jupyter notebook/lab server for multiple users

conda install -c conda-forge jupyterhub
jupyterhub --generate-config
vi jupyterhub_config.py
c.Spawner.notebook_dir = '~'
c.PAMAuthenticator.encoding = 'utf8'
c.JupyterHub.admin_access = True
c.Authenticator.whitelist = {'xbao'}
c.Authenticator.admin_users = {'xbao'}
c.JupyterHub.ip = '128.97.31.179'  # public network access
c.JupyterHub.port = 8888
c.Spawner.cmd = ['jupyterhub-singleuser']
#c.JupyterHub.ssl_cert = ''  # ssl cert
#c.JupyterHub.ssl_key = ''   # ssl key
jupyterhub --config=/usr/local/anaconda_config/jupyterhub_config.py
sudo firewall-cmd --permanent --zone=public --add-port=8888/tcp
  success
firewall-cmd --zone=public --list-ports
  8888/tcp

Run conda init <shell> (bash, zsh, ...) under every user account.

Create an env and kernel (not as root):

conda create -n py37 python=3.7 ipykernel 
conda activate py37
python -m ipykernel install --user --name py37 #  Install kernel so we can see it under jupyter
conda deactivate 

Change the kernel display name in each env:

source activate myenv
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
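To verify what Jupyter sees, kernelspecs can be listed and removed (standard jupyter commands):

jupyter kernelspec list            # show installed kernels and their paths
jupyter kernelspec remove myenv    # remove a kernel (older versions: uninstall)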

Intel parallel studio cluster edition

https://cndaqiang.github.io/2018/01/15/intel-mpi-vasp/

Prerequisites (as root):
yum install libXScrnSaver.x86_64   # X.Org X11 libXss runtime library

Install location:
    /opt/intel

Component(s) selected:
    Intel Trace Analyzer and Collector 2020                       456MB
        Intel Trace Analyzer for Intel(R) 64 Architecture
        Intel Trace Collector for Intel(R) 64 Architecture

    Intel Cluster Checker 2019 Update 6                           215MB
        Cluster Checker

    Intel VTune Profiler 2020                                     2.1GB
        Command line interface
        Sampling Driver kit
        Graphical user interface
        Platform Profiler

    Intel Inspector 2020                                          356MB
        Command line interface
        Graphical user interface

    Intel Advisor 2020                                            1.1GB
        Command line interface
        Graphical user interface
        Flow Graph Analyzer

    Intel C++ Compiler 19.1                                       1.3GB
        Intel C++ Compiler

    Intel Fortran Compiler 19.1                                   476MB
        Intel Fortran Compiler

    Intel Math Kernel Library 2020 for C/C++                      2.7GB
        Intel MKL core libraries for C/C++
        Cluster support for C/C++
        Intel TBB threading support
        PGI* C/C++ compiler support
        GNU* C/C++ compiler support

    Intel Math Kernel Library 2020 for Fortran                    2.6GB
        Intel MKL core libraries for Fortran
        Cluster support for Fortran
        GNU* Fortran compiler support
        Fortran 95 interfaces for BLAS and LAPACK

    Intel Integrated Performance Primitives 2020                  4.1GB
        Intel IPP single-threaded libraries: General package
        Intel IPP multi-threaded libraries

    Intel Threading Building Blocks 2020                           53MB
        Intel TBB

    Intel Data Analytics Acceleration Library 2020                3.8GB
        Intel Data Analytics Acceleration Library 2020

    Intel MPI Library 2019 Update 6                               854MB
        Intel MPI Benchmarks
        Intel MPI Library for applications running on Intel(R) 64
Architecture

    GNU* GDB 8.3                                                  230MB
        GNU* GDB 8.3 on Intel(R) 64
        Source of GNU* GDB 8.3
        Python sources

    Intel(R) Distribution for Python*                             6.1GB
        Intel(R) Distribution for Python* 3 for Linux*

   Install space required: 17.6GB

Driver parameters:
    Sampling driver install type: Driver will be built
    Load drivers: yes
    Reload automatically at reboot: yes
    Per-user collection mode: no
    Driver access will be restricted:
        Driver access group: vtune group will be created
        Driver permissions: 660

Installation Target:
    Install on the current system only

Add to /etc/profile:

source /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh intel64
source /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/bin/mklvars.sh intel64 ilp64
source /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpivars.sh -ofi_internal

Test:
which icc ifort icpc mpiifort
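Beyond which, a quick smoke test (the /tmp/t.c file is hypothetical, written here just for the check):

cat > /tmp/t.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d ok\n", rank);
    MPI_Finalize();
    return 0;
}
EOF
mpiicc /tmp/t.c -o /tmp/t && mpirun -n 2 /tmp/t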

autojump

Autojump needs to be installed under every user account.

screen

sudo yum install screen
# Ctrl+a then d to detach
screen -ls
screen -r <id>
pkill screen

Aspect

sudo yum install environment-modules   # installs the module command

module settings

https://zhuanlan.zhihu.com/p/50725572

tree of directories

sudo yum install tree
module setup
We need to add the magic header as well:

#%Module 1.0
#
#  Intel MPI module for use with 'environment-modules' package:
#
env2 -from bash -to modulecmd "/opt/intel/compilers_and_libraries/linux/bin/compilervars.sh intel64" >> 2020.0.166
env2 -from bash -to modulecmd "/opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/bin/mklvars.sh intel64 ilp64" >> 2020.0.166
env2 -from bash -to modulecmd "/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpivars.sh -ofi_internal" >> 2020.0.166
$ module avail
------------------------------------------------------------ /etc/modulefiles -------------------------------------------------------------
mpi/gnu/openmpi/3.1.3          mpi/intel/intel_mpi/2020.0.166
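Typical usage once the modulefiles exist:

module load mpi/intel/intel_mpi/2020.0.166
module list          # confirm it is loaded
which mpiifort       # should resolve to the Intel MPI wrapper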

So by default we register the Intel compilers, MKL, Intel's libfabric (1.7.0 alpha?), and the GNU compilers, but not Intel MPI, by adding the following to /etc/profile:

#export CC=mpiicc; export CXX=mpiicpc; export FC=mpiifort; export FF=mpiifort
source /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh intel64
#source /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/bin/mklvars.sh intel64 ilp64
source /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpivars.sh -ofi_internal

export CC=mpicc; export CXX=mpicxx; export FC=mpif90; export FF=mpif77
#We need to unload first or it won't work
module unload mpi/gnu/openmpi/3.1.3
module load mpi/gnu/openmpi/3.1.3

Using candi to install dealii

https://github.com/koecher/candi/blob/master/deal.II-toolchain/platforms/supported/centos7.platform
https://github.com/dealii/candi

https://github.com/geodynamics/aspect/wiki/Installation-FAQ
Installation using Intel's MKL as BLAS/LAPACK:
In candi.cfg:

MKL=ON
MKL_DIR=${MKLROOT}/lib/intel64

dependencies from yum:

https://github.com/koecher/candi/blob/master/deal.II-toolchain/platforms/supported/centos7.platform

sudo yum install patch svn git wget @development-tools 
sudo yum install cmake patch libtool libtool-ltdl libtool-ltdl-devel \
 lua lua-devel \
 doxygen graphviz graphviz-devel qt-devel

up-to-date gcc yum source
https://blog.csdn.net/zhangpeterx/article/details/96141900

yum install devtoolset-8-gcc*
scl enable devtoolset-8 bash
#a new bash shell is opened so cannot be used in /etc/profile

However, to load gcc 8 via /etc/profile, we need to use:

source scl_source enable devtoolset-8

To write this into a modulefile:

#write magic headers as above
env2 -from bash -to modulecmd "/opt/rh/devtoolset-8/enable" >>/etc/modulefiles/gcc/8.3.1
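Then the toolchain can be loaded like any other module:

module load gcc/8.3.1
gcc --version   # should report 8.3.1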

cmake3
sudo yum -y install cmake3
https://stackoverflow.com/questions/48831131/cmake-on-linux-centos-7-how-to-force-the-system-to-use-cmake3

$ sudo alternatives --install /usr/local/bin/cmake cmake /usr/bin/cmake 10 \
--slave /usr/local/bin/ctest ctest /usr/bin/ctest \
--slave /usr/local/bin/cpack cpack /usr/bin/cpack \
--slave /usr/local/bin/ccmake ccmake /usr/bin/ccmake \
--family cmake

$ sudo alternatives --install /usr/local/bin/cmake cmake /usr/bin/cmake3 20 \
--slave /usr/local/bin/ctest ctest /usr/bin/ctest3 \
--slave /usr/local/bin/cpack cpack /usr/bin/cpack3 \
--slave /usr/local/bin/ccmake ccmake /usr/bin/ccmake3 \
--family cmake
$ sudo alternatives --config cmake

There are 2 programs which provide 'cmake'.

  Selection    Command
-----------------------------------------------
   1           cmake (/usr/bin/cmake)
*+ 2           cmake (/usr/bin/cmake3)

Enter to keep the current selection[+], or type selection number: 1

postpone

sudo yum install openmpi3 openmpi3-devel

The -devel package provides mpicc.

export CC=mpicc; export CXX=mpicxx; export FC=mpif90; export FF=mpif77

Test other options with openmpi

https://groups.google.com/forum/#!topic/dealii/k7Goj-sGM3A

in candi.cfg

# enable machine-specific optimizations (implies -march=native AVX etc.)?
NATIVE_OPTIMIZATIONS=true

# Would you like to build stable version of deal.II?
# If STABLE_BUILD=false, then the development version of deal.II will be
# installed.
#STABLE_BUILD=true
STABLE_BUILD=false

#comment these two
#once:petsc
#once:slepc

Flags

The Intel MPI compiler produces tons of segmentation faults (error 139 or 2) when building deal.II, since the detected MPI seems to be OpenMPI (even when OpenMPI is not installed); changing to OpenMPI solves the problem.
intel-linux-compilation-commands
Spack

The content below shows an installation test with the Intel toolchain (failed with candi but succeeded with spack):
export CC=mpiicc; export CXX=mpiicpc; export FC=mpiifort; export FF=mpiifort

./candi.sh --platform=deal.II-toolchain/platforms/supported/centos7.platform -p /opt/bin/ -j 112
The stack size should be large enough to avoid related errors:

ulimit -s 99999

error 1: petsc
Could not find a sufficient PETSC installation: PETSC is compiled against a different MPI library than the one deal.II picked up.
-- DEAL_II_WITH_PETSC has unmet external dependencies.
CMake Error at cmake/configure/configure_3_petsc.cmake:141 (MESSAGE):


  Could not find the petsc library!

  Could not find a sufficient PETSC installation:

  PETSC has to be compiled against the same MPI library as deal.II but the
  link line of PETSC contains:

    /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release_mt/libmpi.so

  which is not listed in MPI_LIBRARIES:

    MPI_LIBRARIES = "/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpicxx.so /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpifort.so /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so /lib64/librt.so /lib64/libpthread.so /lib64/libdl.so"



  Please ensure that the petsc library version 3.3.0 or newer is installed on
  your computer and is configured with the same mpi options as deal.II

  If the library is not at a default location, either provide some hints

  for the autodetection:

  PETSc installed with --prefix=<...> to a destination:

      $ PETSC_DIR="..." cmake <...>
      $ cmake -DPETSC_DIR="..." <...>

  PETSc compiled in source tree:

      $ PETSC_DIR="..."  PETSC_ARCH="..." cmake <...>
      $ cmake -DPETSC_DIR="..." -DPETSC_ARCH="..." <...>

  or set the relevant variables by hand in ccmake.



Call Stack (most recent call first):
  /opt/bin/tmp/build/deal.II-v9.1.1/CMakeFiles/CMakeTmp/evaluate_expression.tmp:1 (FEATURE_PETSC_ERROR_MESSAGE)
  cmake/macros/macro_evaluate_expression.cmake:30 (INCLUDE)
  cmake/macros/macro_configure_feature.cmake:267 (EVALUATE_EXPRESSION)
  cmake/configure/configure_3_petsc.cmake:160 (CONFIGURE_FEATURE)
  cmake/macros/macro_verbose_include.cmake:19 (INCLUDE)
  CMakeLists.txt:124 (VERBOSE_INCLUDE)


-- Configuring incomplete, errors occurred!
See also "/opt/bin/tmp/build/deal.II-v9.1.1/CMakeFiles/CMakeOutput.log".
See also "/opt/bin/tmp/build/deal.II-v9.1.1/CMakeFiles/CMakeError.log".
Failure with exit status: 1
Exit message: There was a problem configuring dealii v9.1.1.

We need to ignore petsc. In candi.cfg and deal.II-toolchain/packages/default.packages, comment out these lines:

#once:petsc
#once:slepc

Delete petsc and slepc:

cd /opt/bin
sudo rm -r petsc-3.11.3 slepc-3.11.2
error 2

https://software.intel.com/en-us/forums/intel-distribution-for-python/topic/797943

https://software.intel.com/en-us/articles/intel-mpi-library-2019-over-libfabric

CMake Error at /usr/share/cmake/Modules/CMakeTestCXXCompiler.cmake:54 (message):
  The C++ compiler
  "/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpiicpc"
  is not able to compile a simple test program.

  It fails with the following output:

   Change Dir: /opt/bin/tmp/build/deal.II-v9.1.1/CMakeFiles/CMakeTmp
......
ld: warning: libfabric.so.1, needed by
  /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so,
  not found (try using -rpath or -rpath-link)
......
gmake[1]: *** [cmTryCompileExec1627353713] Error 1

  gmake[1]: Leaving directory
  `/opt/bin/tmp/build/deal.II-v9.1.1/CMakeFiles/CMakeTmp'

  gmake: *** [cmTryCompileExec1627353713/fast] Error 2

We need to link the libraries (already included in /etc/profile):
source /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpivars.sh -ofi_internal

Use rpm -qf $(which fi_info) to see the libfabric version:

intel-mpi-rt-2019.6-166-2019.6-166.x86_64
error 3:
Verifying Fortran/C Compiler Compatibility - Failed

Using cmake3 solves this problem (making an alias is not enough; use alternatives).

"error" 4 during building deal.ii(using module to load intel mpi, cmake 3.14 is used)
-- Include /usr/syssoft/bin/tmp/unpack/deal.II-v9.1.1/cmake/configure/configure_1_mpi.cmake
-- MPI_MPI_H not found! Call:
--     FIND_FILE(MPI_MPI_H NAMES mpi.h HINTS)
--   MPI_VERSION: 3.1
--   MPI_LIBRARIES:
--   MPI_INCLUDE_DIRS:
--   MPI_USER_INCLUDE_DIRS:
--   MPI_CXX_FLAGS:
--   MPI_LINKER_FLAGS:
-- Found MPI
-- DEAL_II_WITH_MPI successfully set up with external dependencies.

cmake 3.9.6 can set up Intel MPI correctly when building deal.II (download from GitHub and use alternatives to add it to the env):
https://github.com/Kitware/CMake/releases/tag/v3.9.6
Building libfabric from source may also solve this problem (not tested).

-- Include /usr/syssoft/bin/tmp/unpack/deal.II-v9.1.1/cmake/configure/configure_1_mpi.cmake
-- Found MPI_MPI_H
--   MPI_VERSION: 3.1
--   MPI_LIBRARIES: /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpicxx.so;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpifort.so;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so;/usr/lib64/libdl.so;/usr/lib64/librt.so;/usr/lib64/libpthread.so;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpifort.so;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so;/usr/lib64/libdl.so;/usr/lib64/librt.so;/usr/lib64/libpthread.so;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpifort.so;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so;/usr/lib64/libdl.so;/usr/lib64/librt.so;/usr/lib64/libpthread.so
--   MPI_INCLUDE_DIRS: /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/include;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/include
--   MPI_USER_INCLUDE_DIRS: /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/include;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/include
--   MPI_CXX_FLAGS:
--   MPI_LINKER_FLAGS: -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib
-- Found MPI
-- DEAL_II_WITH_MPI successfully set up with external dependencies.
Remaining problems (set up with bundled packages):
  • DEAL_II_WITH_THREADS (TBB)
  • DEAL_II_WITH_BOOST
  • DEAL_II_WITH_MUPARSER
  • DEAL_II_WITH_UMFPACK

The latter two can be solved through:

yum install muParser muParser-devel suitesparse suitesparse-devel

TBB from Intel and Boost from yum are not recognized correctly with cmake 3.9.6 + the Intel compiler.
TBB from yum is 4.1.9, but deal.II requires 4.2.

###
#
#  deal.II configuration:
#        CMAKE_BUILD_TYPE:       DebugRelease
#        BUILD_SHARED_LIBS:      ON
#        CMAKE_INSTALL_PREFIX:   /usr/syssoft/bin/deal.II-v9.1.1
#        CMAKE_SOURCE_DIR:       /usr/syssoft/bin/tmp/unpack/deal.II-v9.1.1
#                                (version 9.1.1, shortrev 777cf92)
#        CMAKE_BINARY_DIR:       /usr/syssoft/bin/tmp/build/deal.II-v9.1.1
#        CMAKE_CXX_COMPILER:     Intel 19.1.0.20191121 on platform Linux x86_64
#                                /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpiicpc
#
#  Configured Features (DEAL_II_ALLOW_BUNDLED = ON, DEAL_II_ALLOW_AUTODETECTION = ON):
#      ( DEAL_II_WITH_64BIT_INDICES = OFF )
#      ( DEAL_II_WITH_ADOLC = OFF )
#      ( DEAL_II_WITH_ARPACK = OFF )
#      ( DEAL_II_WITH_ASSIMP = OFF )
#        DEAL_II_WITH_BOOST set up with bundled packages
#        DEAL_II_WITH_COMPLEX_VALUES = ON
#      ( DEAL_II_WITH_CUDA = OFF )
#        DEAL_II_WITH_CXX14 = ON
#        DEAL_II_WITH_CXX17 = ON
#      ( DEAL_II_WITH_GINKGO = OFF )
#      ( DEAL_II_WITH_GMSH = OFF )
#      ( DEAL_II_WITH_GSL = OFF )
#        DEAL_II_WITH_HDF5 set up with external dependencies
#        DEAL_II_WITH_LAPACK set up with external dependencies
#        DEAL_II_WITH_METIS set up with external dependencies
#        DEAL_II_WITH_MPI set up with external dependencies
#        DEAL_II_WITH_MUPARSER set up with bundled packages
#      ( DEAL_II_WITH_NANOFLANN = OFF )
#      ( DEAL_II_WITH_NETCDF = OFF )
#        DEAL_II_WITH_OPENCASCADE set up with external dependencies
#        DEAL_II_WITH_P4EST set up with external dependencies
#      ( DEAL_II_WITH_PETSC = OFF )
#      ( DEAL_II_WITH_SCALAPACK = OFF )
#      ( DEAL_II_WITH_SLEPC = OFF )
#      ( DEAL_II_WITH_SUNDIALS = OFF )
#      ( DEAL_II_WITH_SYMENGINE = OFF )
#        DEAL_II_WITH_THREADS set up with bundled packages
#        DEAL_II_WITH_TRILINOS set up with external dependencies
#        DEAL_II_WITH_UMFPACK set up with bundled packages
#        DEAL_II_WITH_ZLIB set up with external dependencies
#
#  Component configuration:
#      ( DEAL_II_COMPONENT_DOCUMENTATION = OFF )
#        DEAL_II_COMPONENT_EXAMPLES
#      ( DEAL_II_COMPONENT_PACKAGE = OFF )
#      ( DEAL_II_COMPONENT_PYTHON_BINDINGS = OFF )
#
#  Detailed information (compiler flags, feature configuration) can be found in detailed.log
#
#  Run  $ make info  to print a help message with a list of top level targets
#
###

TBB with yum source (4.1.9):

-- Include /usr/syssoft/bin/tmp/unpack/deal.II-v9.1.1/cmake/configure/configure_1_threads.cmake
-- Found TBB_LIBRARY
-- TBB_DEBUG_LIBRARY not found! Call:
--     FIND_LIBRARY(TBB_DEBUG_LIBRARY NAMES tbb_debug HINTS PATH_SUFFIXES lib lib64 lib)
-- Found TBB_INCLUDE_DIR
--   TBB_VERSION: 4.1
--   TBB_LIBRARIES: /usr/lib64/libtbb.so
--   TBB_INCLUDE_DIRS: /usr/include
--   TBB_USER_INCLUDE_DIRS: /usr/include
-- Found TBB
-- The externally provided TBB library is older than version 4.2.0, which cannot be used with deal.II.
-- DEAL_II_WITH_THREADS has unmet external dependencies.
-- DEAL_II_WITH_THREADS successfully set up with bundled packages.

Here are the default paths searched by cmake:

(base) [root@localhost lib64]# ls -lt|grep tbb
lrwxrwxrwx.  1 root root       23 Feb  3 21:18 libtbbmalloc_proxy.so -> libtbbmalloc_proxy.so.2
lrwxrwxrwx.  1 root root       17 Feb  3 21:18 libtbbmalloc.so -> libtbbmalloc.so.2
lrwxrwxrwx.  1 root root       11 Feb  3 21:18 libtbb.so -> libtbb.so.2
-rwxr-xr-x.  1 root root    11400 Nov 20  2015 libtbbmalloc_proxy.so.2
-rwxr-xr-x.  1 root root   108728 Nov 20  2015 libtbbmalloc.so.2
-rwxr-xr-x.  1 root root   216424 Nov 20  2015 libtbb.so.2

(base) [root@localhost include]# ls -lt|grep tbb
drwxr-xr-x.  5 root root   4096 Feb  3 21:18 tbb

(base) [root@localhost include]# export|grep tbb|grep usr/include/tbb
declare -x OLDPWD="/usr/include/tbb"

We can either remove tbb from yum and copy the related files from the Intel directory into those paths, or build from source and add it to the env.

git clone https://github.com/intel/tbb
cd tbb
make all
mkdir /etc/modulefiles/tbb/
vi /etc/modulefiles/tbb/2020.0.166
#%Module 1.0
#
#  Intel MPI module for use with 'environment-modules' package:
#

#set TBB_DIR /opt/intel/compilers_and_libraries_2020.0.166/linux/tbb
set             TBB_DIR         /usr/syssoft/tbb

#prepend-path   LIBRARY_PATH $TBB_DIR/lib/intel64/gcc4.8
#prepend-path   CPATH $TBB_DIR/include
#prepend-path   LD_LIBRARY_PATH $TBB_DIR/lib/intel64/gcc4.8

prepend-path    LIBRARY_PATH $TBB_DIR/build/linux_intel64_gcc_cc8.3.1_libc2.17_kernel3.10.0_release
prepend-path    CPATH $TBB_DIR/include
prepend-path    LD_LIBRARY_PATH $TBB_DIR/build/linux_intel64_gcc_cc8.3.1_libc2.17_kernel3.10.0_release

#prepend-path   TBB_VERSION     2020
prepend-path    TBB_DIR $TBB_DIR
prepend-path    TBB_INCLUDE_DIRS  $TBB_DIR/include
prepend-path    TBB_USER_INCLUDE_DIRS   $TBB_DIR/include
prepend-path    TBB_LIBRARIES   $TBB_DIR/build/linux_intel64_gcc_cc8.3.1_libc2.17_kernel3.10.0_release
#prepend-path   TBB_LIBRARY_DEBUG

The current master branch of deal.II cannot recognize the TBB version correctly (fixed in the develop branch). Since candi fetches master anyway, we can modify the header:

vi /usr/syssoft/tbb/include/tbb/tbb_stddef.h

and change
#define TBB_VERSION_MAJOR 2020
to
#define TBB_VERSION_MAJOR 2019

Then run candi again (don't forget to remove /tmp/build/deal.ii_xxx).
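Equivalently, the header edit can be scripted (same file and values as above):

sed -i 's/#define TBB_VERSION_MAJOR 2020/#define TBB_VERSION_MAJOR 2019/' \
  /usr/syssoft/tbb/include/tbb/tbb_stddef.h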

-- Include /usr/syssoft/bin/tmp/unpack/deal.II-v9.1.1/cmake/configure/configure_1_threads.cmake
-- Found TBB_LIBRARY
-- TBB_DEBUG_LIBRARY not found! Call:
--     FIND_LIBRARY(TBB_DEBUG_LIBRARY NAMES tbb_debug HINTS /usr/syssoft/tbb PATH_SUFFIXES lib lib64 lib)
-- Found TBB_INCLUDE_DIR
--   TBB_VERSION: 9.1
--   TBB_LIBRARIES: /usr/syssoft/tbb/build/linux_intel64_gcc_cc8.3.1_libc2.17_kernel3.10.0_release/libtbb.so
--   TBB_INCLUDE_DIRS: /usr/syssoft/tbb/include
--   TBB_USER_INCLUDE_DIRS: /usr/syssoft/tbb/include
-- Found TBB
-- Performing Test DEAL_II_HAVE_MT_POSIX_BARRIERS
-- Performing Test DEAL_II_HAVE_MT_POSIX_BARRIERS - Success
-- DEAL_II_WITH_THREADS successfully set up with external dependencies.

Install boost 1.7, 1.6.9,...
https://gist.github.com/1duo/2d1d851f76f8297be264b52c1f31a2ab
export BOOST_DIR=/usr/local/boost
Unfortunately, this doesn't work:
New Boost version may have incorrect or missing dependencies and imported targets

Finally I succeeded with spack:

cd /usr/syssoft
# a modified version of spack is required
git clone https://github.com/spack/spack.git spack
export SPACK_ROOT=/usr/syssoft/spack
export PATH="$SPACK_ROOT/bin:$PATH"
module unload mpi/gnu/openmpi/3.1.3
module load mpi/intel/intel_mpi/2020.0.166

We need to setup the spack env:

#Add compilers
spack compiler find
vi ~/.spack/linux/compilers.yaml
compilers:
- compiler:
    environment: {}
    extra_rpaths:
     - /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin
     - /opt/intel/compilers_and_libraries_2020.0.166/linux/ipp/lib/intel64
    flags: {}
    modules: [mpi/intel/intel_mpi/2020.0.166]
    operating_system: centos7
    paths:
      cc: /opt/intel/compilers_and_libraries_2020.0.166/linux/bin/intel64/icc
      cxx: /opt/intel/compilers_and_libraries_2020.0.166/linux/bin/intel64/icpc
      f77: /opt/intel/compilers_and_libraries_2020.0.166/linux/bin/intel64/ifort
      fc: /opt/intel/compilers_and_libraries_2020.0.166/linux/bin/intel64/ifort
    spec: intel@19.1.0.166
    target: x86_64
vi ~/.spack/linux/packages.yaml
packages:
  all:
    compiler: [intel]
    providers:
      mpi: [intel-mpi]
      blas: [intel-mkl]
      lapack: [intel-mkl]
      scalapack: [intel-mkl]
  intel-mpi:
    version: [2020.0.166]
    paths:
      intel-mpi@2020.0.166%intel@19.1.0.166: /opt/intel/
    buildable: False
  intel-mkl:
    version: [2020.0.166]
    paths:
      intel-mkl@2020.0.166%intel@19.1.0.166: /opt/intel/
    buildable: False
  cmake:
    version: [3.9.6]
    paths:
      cmake@3.9.6%intel@19.1.0.166: /usr/syssoft/cmake3.9.6/cmake-3.9.6-Linux-x86_64/
    buildable: False
  dealii:
    variants: +optflags~python
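Before the long build, it is worth checking how spack will concretize the spec (spack spec is a standard spack command):

spack spec dealii%intel~assimp~petsc~slepc+mpi^intel-mpi^intel-mkl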

https://github.com/spack/spack/issues/7670
https://github.com/spack/spack/issues/8292
https://github.com/spack/spack/issues/8915
https://github.com/spack/spack/pull/8976

Python 3 causes a "CMakePackage not defined" error, so use a Python 2 env:

conda activate py27   # conda env list shows the python2 path

Replace the source file URLs for bzip2 and gsl:

In var/spack/repos/builtin/packages/bzip2/package.py
https://downloads.sourceforge.net/project/bzip2/bzip2-1.0.6.tar.gz

In var/spack/repos/builtin/packages/gsl/package.py
https://ftp.gnu.org/gnu/gsl/gsl-2.4.tar.gz

Then start installing:

# -L to avoid lock error in spack, --debug to output empty error
spack -L --debug install -j 112 dealii%intel~assimp~petsc~slepc+mpi^intel-mpi^intel-mkl
# set deal.ii env for aspect
export DEAL_II_DIR=$(spack location -i dealii)
#Then install aspect with intelmpi

deal.II and other package path configuration:
Add to /etc/profile:

source /opt/bin/configuration/enable.sh

After the deal.II installation, we can test that it works by compiling step-32 as a non-root user:

cd $DEAL_II_DIR/examples/step-32
cmake . && make
mpirun -n 2 ./step-32

Install aspect

git clone https://github.com/geodynamics/aspect.git
cd aspect
mkdir build; cd build; 
cmake -DDEAL_II_DIR=/u/username/deal-installed/ ..
make -j 112

Test aspect:
mpirun -n <N> ./aspect <path/to/prm>

convection-box.prm, 56 MPI processes:

*** Timestep 1071:  t=0.5 seconds
   Solving temperature system... 0 iterations.
   Solving Stokes system... 0+0 iterations.

   Postprocessing:
     RMS, max velocity:                  42.9 m/s, 69.4 m/s
     Temperature min/avg/max:            0 K, 0.5 K, 1 K
     Heat fluxes through boundary parts: 0 W, 0 W, -4.885 W, 4.885 W

Termination requested by criterion: end time
+----------------------------------------------+------------+------------+
| Total wallclock time elapsed since start     |      66.8s |            |
|                                              |            |            |
| Section                          | no. calls |  wall time | % of total |
+----------------------------------+-----------+------------+------------+
| Assemble Stokes system           |      1072 |       4.8s |       7.2% |
| Assemble temperature system      |      1072 |      13.1s |        20% |
| Build Stokes preconditioner      |         1 |    0.0326s |         0% |
| Build temperature preconditioner |      1072 |     0.342s |      0.51% |
| Initialization                   |         1 |     0.196s |      0.29% |
| Postprocessing                   |      1072 |      19.2s |        29% |
| Setup dof systems                |         1 |    0.0459s |         0% |
| Setup initial conditions         |         1 |    0.0211s |         0% |
| Setup matrices                   |         1 |    0.0199s |         0% |
| Solve Stokes system              |      1072 |      16.3s |        24% |
| Solve temperature system         |      1072 |      3.36s |         5% |
+----------------------------------+-----------+------------+------------+

Running with 64 threads is actually slower:

+----------------------------------------------+------------+------------+
| Total wallclock time elapsed since start     |       112s |            |
|                                              |            |            |
| Section                          | no. calls |  wall time | % of total |
+----------------------------------+-----------+------------+------------+
| Assemble Stokes system           |      1072 |      6.25s |       5.6% |
| Assemble temperature system      |      1072 |      15.8s |        14% |
| Build Stokes preconditioner      |         1 |    0.0419s |         0% |
| Build temperature preconditioner |      1072 |     0.452s |      0.41% |
| Initialization                   |         1 |       0.2s |      0.18% |
| Postprocessing                   |      1072 |        30s |        27% |
| Setup dof systems                |         1 |    0.0627s |         0% |
| Setup initial conditions         |         1 |    0.0362s |         0% |
| Setup matrices                   |         1 |    0.0388s |         0% |
| Solve Stokes system              |      1072 |      38.5s |        35% |
| Solve temperature system         |      1072 |      8.47s |       7.6% |
+----------------------------------+-----------+------------+------------+

Intel MPI is slower during pre/post-processing but faster when solving equations. Overall it is slower than OpenMPI for this small system.

$ mpirun  -n 56 ./aspect ../cookbooks/convection-box.prm
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      83.2s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble Stokes system          |      1072 |      6.37s |       7.7% |
| Assemble temperature system     |      1072 |      18.5s |        22% |
| Build Stokes preconditioner     |         1 |    0.0334s |         0% |
| Build temperature preconditioner|      1072 |     0.349s |      0.42% |
| Initialization                  |         1 |     0.257s |      0.31% |
| Postprocessing                  |      1072 |      25.1s |        30% |
| Setup dof systems               |         1 |    0.0618s |         0% |
| Setup initial conditions        |         1 |    0.0304s |         0% |
| Setup matrices                  |         1 |    0.0222s |         0% |
| Solve Stokes system             |      1072 |      13.3s |        16% |
| Solve temperature system        |      1072 |      2.22s |       2.7% |
+---------------------------------+-----------+------------+------------+

post-installation problem

convection-box.prm, 112 MPI processes:

mpirun --oversubscribe -n 112 ./aspect ../cookbooks/convection-box.prm

Hyperthreading cannot be used for more than 64 processes with OpenMPI 3, even with --use-hwthread-cpus or --oversubscribe:
https://www.cnblogs.com/Jay-CFD/p/8848268.html
https://www.open-mpi.org/doc/v3.0/man1/mpirun.1.php

But we can use 112 threads for step-32 of deal.II, and we can also run two aspect instances with 64 and 48 threads.
Unsolved in aspect?

Raising the stack limit does not help:
https://blog.csdn.net/gatieme/article/details/51058797

Look at limit of current user:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1416082
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 6000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Change the ulimit as root and re-login:

vi /etc/security/limits.conf
# add to the end
xbao hard nproc 9999999
xbao soft nproc 9999999

Exit the terminal and log in again:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1416082
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 9999999
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The error message changed from

Assertion is_set(s) failed on line 214 of file /home/app/tmp/unpack/deal.II-v9.1.1/bundled/tbb-2018_U2/src/tbb/governor.cpp
Detailed description: Attempt to terminate non-local scheduler instance
thread_monitor Resource temporarily unavailable in pthread_create

to

ML::FATAL ERROR:: ML::FATAL ERROR:: 1, /opt/bin/tmp/unpack/Trilinos-trilinos-release-12-10-1/packages/ml/src/Utils/ml_MultiLevelPreconditioner_NullSpace.cpp, line 98

When we change to another input file, the error disappears.

$mpirun --oversubscribe -n 112 ./aspect ../cookbooks/S20RTS.prm
-----------------------------------------------------------------------------
-- This is ASPECT, the Advanced Solver for Problems in Earth's ConvecTion.
--     . version 2.2.0-pre (master, 8ebc364)
--     . using deal.II 9.1.1
--     .       with 32 bit indices and vectorization level 1 (128 bits)
--     . using Trilinos 12.10.1
--     . using p4est 2.2.0
--     . running in DEBUG mode
--     . running with 112 MPI processes
-----------------------------------------------------------------------------

-----------------------------------------------------------------------------
-- For information on how to cite ASPECT, see:
--   https://aspect.geodynamics.org/citing.html?ver=2.2.0-pre&sha=8ebc364&src=code
-----------------------------------------------------------------------------
Number of active cells: 6,144 (on 3 levels)
Number of degrees of freedom: 215,962 (156,774+6,930+52,258)

*** Timestep 0:  t=0 years
   Solving temperature system... 0 iterations.
   Rebuilding Stokes preconditioner...
   Solving Stokes system... 102+0 iterations.

   Postprocessing:

     Model domain depth (m):                        2.89e+06
     Temperature contrast across model domain (K):  1000
     Reference depth (m):                           0
     Reference temperature (K):                     1600
     Reference pressure (Pa):                       0
     Reference gravity (m/s^2):                     10
     Reference density (kg/m^3):                    3300
     Reference thermal expansion coefficient (1/K): 3e-05
     Reference specific heat capacity (J/(K*kg)):   1250
     Reference thermal conductivity (W/(m*K)):      4.125
     Reference viscosity (Pa*s):                    1e+21
     Reference thermal diffusivity (m^2/s):         1e-06
     Rayleigh number:                               2.38962e+07

     RMS, max velocity:                  0.992 m/year, 1.67 m/year
     Writing heat flux map:              output-S20RTS/heat_flux.00000
     Heat fluxes through boundary parts: -6.303e+11 W, -1.46e+12 W
     Density at top/bottom of domain:    3359 kg/m^3, 3260 kg/m^3
     Pressure at top/bottom of domain:   -1.873 Pa, 9.537e+10 Pa
     Computing dynamic topography
     Writing geoid anomaly:              output-S20RTS/geoid_anomaly.00000
     Writing graphical output:           output-S20RTS/solution/solution-00000

Termination requested by criterion: end time


+----------------------------------------------+------------+------------+
| Total wallclock time elapsed since start     |      44.4s |            |
|                                              |            |            |
| Section                          | no. calls |  wall time | % of total |
+----------------------------------+-----------+------------+------------+
| Assemble Stokes system           |         1 |      2.82s |       6.4% |
| Assemble temperature system      |         1 |      1.77s |         4% |
| Build Stokes preconditioner      |         1 |      2.21s |         5% |
| Build temperature preconditioner |         1 |    0.0236s |         0% |
| Initialization                   |         1 |      0.52s |       1.2% |
| Postprocessing                   |         1 |      28.4s |        64% |
| Setup dof systems                |         1 |      1.78s |         4% |
| Setup initial conditions         |         1 |       2.3s |       5.2% |
| Setup matrices                   |         1 |     0.393s |      0.89% |
| Solve Stokes system              |         1 |      2.32s |       5.2% |
| Solve temperature system         |         1 |    0.0452s |       0.1% |
+----------------------------------+-----------+------------+------------+

Compared to 56 cores:

$ mpirun  -n 56 ./aspect ../cookbooks/S20RTS.prm
-----------------------------------------------------------------------------
-- This is ASPECT, the Advanced Solver for Problems in Earth's ConvecTion.
--     . version 2.2.0-pre (master, 8ebc364)
--     . using deal.II 9.1.1
--     .       with 32 bit indices and vectorization level 1 (128 bits)
--     . using Trilinos 12.10.1
--     . using p4est 2.2.0
--     . running in DEBUG mode
--     . running with 56 MPI processes
-----------------------------------------------------------------------------

-----------------------------------------------------------------------------
-- For information on how to cite ASPECT, see:
--   https://aspect.geodynamics.org/citing.html?ver=2.2.0-pre&sha=8ebc364&src=code
-----------------------------------------------------------------------------
Number of active cells: 6,144 (on 3 levels)
Number of degrees of freedom: 215,962 (156,774+6,930+52,258)

*** Timestep 0:  t=0 years
   Solving temperature system... 0 iterations.
   Rebuilding Stokes preconditioner...
   Solving Stokes system... 193+0 iterations.

   Postprocessing:

     Model domain depth (m):                        2.89e+06
     Temperature contrast across model domain (K):  1000
     Reference depth (m):                           0
     Reference temperature (K):                     1600
     Reference pressure (Pa):                       0
     Reference gravity (m/s^2):                     10
     Reference density (kg/m^3):                    3300
     Reference thermal expansion coefficient (1/K): 3e-05
     Reference specific heat capacity (J/(K*kg)):   1250
     Reference thermal conductivity (W/(m*K)):      4.125
     Reference viscosity (Pa*s):                    1e+21
     Reference thermal diffusivity (m^2/s):         1e-06
     Rayleigh number:                               2.38962e+07

     RMS, max velocity:                  0.395 m/year, 0.791 m/year
     Writing heat flux map:              output-S20RTS/heat_flux.00000
     Heat fluxes through boundary parts: -6.31e+11 W, -1.451e+12 W
     Density at top/bottom of domain:    3359 kg/m^3, 3260 kg/m^3
     Pressure at top/bottom of domain:   -1.818 Pa, 9.537e+10 Pa
     Computing dynamic topography
     Writing geoid anomaly:              output-S20RTS/geoid_anomaly.00000
     Writing graphical output:           output-S20RTS/solution/solution-00000

Termination requested by criterion: end time


+----------------------------------------------+------------+------------+
| Total wallclock time elapsed since start     |      46.8s |            |
|                                              |            |            |
| Section                          | no. calls |  wall time | % of total |
+----------------------------------+-----------+------------+------------+
| Assemble Stokes system           |         1 |      3.16s |       6.8% |
| Assemble temperature system      |         1 |      1.75s |       3.7% |
| Build Stokes preconditioner      |         1 |      1.62s |       3.5% |
| Build temperature preconditioner |         1 |    0.0104s |         0% |
| Initialization                   |         1 |     0.318s |      0.68% |
| Postprocessing                   |         1 |      31.5s |        67% |
| Setup dof systems                |         1 |      1.46s |       3.1% |
| Setup initial conditions         |         1 |       2.6s |       5.5% |
| Setup matrices                   |         1 |     0.319s |      0.68% |
| Solve Stokes system              |         1 |      2.51s |       5.4% |
| Solve temperature system         |         1 |    0.0186s |         0% |
+----------------------------------+-----------+------------+------------+

No difference can be observed between openmpi-3.1.3 and the version shipped inside Intel's directory.
After enabling native optimizations (AVX, AVX-512?):

$mpirun  -n 56 ./aspect ../cookbooks/S20RTS.prm
-----------------------------------------------------------------------------
-- This is ASPECT, the Advanced Solver for Problems in Earth's ConvecTion.
--     . version 2.2.0-pre (master, ef542ec)
--     . using deal.II 9.2.0-pre (master, 8fb1f08)
--     .       with 32 bit indices and vectorization level 3 (512 bits)
--     . using Trilinos 12.10.1
--     . using p4est 2.2.0
--     . running in DEBUG mode
--     . running with 56 MPI processes
-----------------------------------------------------------------------------
......
+----------------------------------------------+------------+------------+
| Total wallclock time elapsed since start     |        42s |            |
|                                              |            |            |
| Section                          | no. calls |  wall time | % of total |
+----------------------------------+-----------+------------+------------+
| Assemble Stokes system           |         1 |      2.46s |       5.9% |
| Assemble temperature system      |         1 |      1.39s |       3.3% |
| Build Stokes preconditioner      |         1 |      1.45s |       3.4% |
| Build temperature preconditioner |         1 |    0.0115s |         0% |
| Initialization                   |         1 |     0.936s |       2.2% |
| Postprocessing                   |         1 |        28s |        67% |
| Setup dof systems                |         1 |      1.37s |       3.3% |
| Setup initial conditions         |         1 |      3.29s |       7.8% |
| Setup matrices                   |         1 |     0.405s |      0.96% |
| Solve Stokes system              |         1 |      1.31s |       3.1% |
| Solve temperature system         |         1 |    0.0204s |         0% |
+----------------------------------+-----------+------------+------------+

Successfully built with Intel MPI!

$ mpirun  -n 56 ./aspect ../cookbooks/S20RTS.prm
-----------------------------------------------------------------------------
-- This is ASPECT, the Advanced Solver for Problems in Earth's ConvecTion.
--     . version 2.2.0-pre (master, 8ebc364)
--     . using deal.II 9.0.0
--     .       with 32 bit indices and vectorization level 3 (512 bits)
--     . using Trilinos 12.12.1
--     . using p4est 2.0.0
--     . running in DEBUG mode
--     . running with 56 MPI processes
-----------------------------------------------------------------------------


-----------------------------------------------------------------------------
The output directory <output-S20RTS/> provided in the input file appears not to exist.
ASPECT will create it for you.
-----------------------------------------------------------------------------


-----------------------------------------------------------------------------
-- For information on how to cite ASPECT, see:
--   https://aspect.geodynamics.org/citing.html?ver=2.2.0-pre&sha=8ebc364&src=code
-----------------------------------------------------------------------------
Number of active cells: 6,144 (on 3 levels)
Number of degrees of freedom: 215,962 (156,774+6,930+52,258)

*** Timestep 0:  t=0 years
   Solving temperature system... 0 iterations.
   Rebuilding Stokes preconditioner...
   Solving Stokes system... 193+0 iterations.

   Postprocessing:

     Model domain depth (m):                        2.89e+06
     Temperature contrast across model domain (K):  1000
     Reference depth (m):                           0
     Reference temperature (K):                     1600
     Reference pressure (Pa):                       0
     Reference gravity (m/s^2):                     10
     Reference density (kg/m^3):                    3300
     Reference thermal expansion coefficient (1/K): 3e-05
     Reference specific heat capacity (J/(K*kg)):   1250
     Reference thermal conductivity (W/(m*K)):      4.125
     Reference viscosity (Pa*s):                    1e+21
     Reference thermal diffusivity (m^2/s):         1e-06
     Rayleigh number:                               2.38962e+07

     RMS, max velocity:                  0.395 m/year, 0.791 m/year
     Writing heat flux map:              output-S20RTS/heat_flux.00000
     Heat fluxes through boundary parts: -6.31e+11 W, -1.451e+12 W
     Density at top/bottom of domain:    3359 kg/m^3, 3260 kg/m^3
     Pressure at top/bottom of domain:   -1.818 Pa, 9.537e+10 Pa
     Computing dynamic topography
     Writing geoid anomaly:              output-S20RTS/geoid_anomaly.00000
     Writing graphical output:           output-S20RTS/solution/solution-00000

Termination requested by criterion: end time


+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      69.7s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble Stokes system          |         1 |      4.81s |       6.9% |
| Assemble temperature system     |         1 |      2.69s |       3.9% |
| Build Stokes preconditioner     |         1 |      2.08s |         3% |
| Build temperature preconditioner|         1 |   0.00984s |         0% |
| Initialization                  |         1 |     0.419s |       0.6% |
| Postprocessing                  |         1 |      47.6s |        68% |
| Setup dof systems               |         1 |      2.92s |       4.2% |
| Setup initial conditions        |         1 |       3.6s |       5.2% |
| Setup matrices                  |         1 |     0.468s |      0.67% |
| Solve Stokes system             |         1 |      2.43s |       3.5% |
| Solve temperature system        |         1 |    0.0194s |         0% |
+---------------------------------+-----------+------------+------------+

Intel MPI with hyperthreading doesn't need an oversubscribe flag here (Intel MPI has no such flag):

$ mpirun  -n 112 ./aspect ../cookbooks/S20RTS.prm
-----------------------------------------------------------------------------
-- This is ASPECT, the Advanced Solver for Problems in Earth's ConvecTion.
--     . version 2.2.0-pre (master, 8ebc364)
--     . using deal.II 9.0.0
--     .       with 32 bit indices and vectorization level 3 (512 bits)
--     . using Trilinos 12.12.1
--     . using p4est 2.0.0
--     . running in DEBUG mode
--     . running with 112 MPI processes
-----------------------------------------------------------------------------

-----------------------------------------------------------------------------
-- For information on how to cite ASPECT, see:
--   https://aspect.geodynamics.org/citing.html?ver=2.2.0-pre&sha=8ebc364&src=code
-----------------------------------------------------------------------------
Number of active cells: 6,144 (on 3 levels)
Number of degrees of freedom: 215,962 (156,774+6,930+52,258)

*** Timestep 0:  t=0 years
   Solving temperature system... 0 iterations.
   Rebuilding Stokes preconditioner...
   Solving Stokes system... 102+0 iterations.

   Postprocessing:

     Model domain depth (m):                        2.89e+06
     Temperature contrast across model domain (K):  1000
     Reference depth (m):                           0
     Reference temperature (K):                     1600
     Reference pressure (Pa):                       0
     Reference gravity (m/s^2):                     10
     Reference density (kg/m^3):                    3300
     Reference thermal expansion coefficient (1/K): 3e-05
     Reference specific heat capacity (J/(K*kg)):   1250
     Reference thermal conductivity (W/(m*K)):      4.125
     Reference viscosity (Pa*s):                    1e+21
     Reference thermal diffusivity (m^2/s):         1e-06
     Rayleigh number:                               2.38962e+07

     RMS, max velocity:                  0.992 m/year, 1.67 m/year
     Writing heat flux map:              output-S20RTS/heat_flux.00000
     Heat fluxes through boundary parts: -6.303e+11 W, -1.46e+12 W
     Density at top/bottom of domain:    3359 kg/m^3, 3260 kg/m^3
     Pressure at top/bottom of domain:   -1.873 Pa, 9.537e+10 Pa
     Computing dynamic topography
     Writing geoid anomaly:              output-S20RTS/geoid_anomaly.00000
     Writing graphical output:           output-S20RTS/solution/solution-00000

Termination requested by criterion: end time


+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      56.5s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble Stokes system          |         1 |      3.88s |       6.9% |
| Assemble temperature system     |         1 |       2.4s |       4.2% |
| Build Stokes preconditioner     |         1 |      1.94s |       3.4% |
| Build temperature preconditioner|         1 |   0.00861s |         0% |
| Initialization                  |         1 |     0.498s |      0.88% |
| Postprocessing                  |         1 |      37.5s |        66% |
| Setup dof systems               |         1 |      2.93s |       5.2% |
| Setup initial conditions        |         1 |      2.83s |         5% |
| Setup matrices                  |         1 |     0.416s |      0.74% |
| Solve Stokes system             |         1 |       1.7s |         3% |
| Solve temperature system        |         1 |    0.0301s |         0% |
+---------------------------------+-----------+------------+------------+

Could it be related to the 32/64-bit index version of deal.II used here? Or OpenMPI with UCX 1.7, or OpenMPI 4? It turns out neither matters.

UCX WARN UCP
>> version is incompatible, required: 1.5, actual: 1.4 (release 0)

The latest version from yum is 1.4, so build from source.
The NUMA headers need to be installed first:
sudo yum install -y numactl-devel

git clone https://github.com/openucx/ucx.git ucx-git
cd ucx-git
./autogen.sh
./contrib/configure-release --prefix=/usr/local/bin/ucx/
make -j112
sudo make install

Create a modulefile for ucx.
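A minimal sketch of such a modulefile, assuming modulefiles live under /etc/modulefiles and using the prefix from the configure step above (the version number 1.7 is arbitrary):

sudo mkdir -p /etc/modulefiles/ucx
sudo tee /etc/modulefiles/ucx/1.7 >/dev/null <<'EOF'
#%Module1.0
## ucx built from source
prepend-path PATH            /usr/local/bin/ucx/bin
prepend-path LD_LIBRARY_PATH /usr/local/bin/ucx/lib
EOF
module load ucx/1.7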

The warning disappeared, but the hyperthreading limit remains.

Install OpenMPI:
https://www.open-mpi.org/faq/?category=building#easy-build
https://bitsanddragons.wordpress.com/2017/12/18/install-openmpi-3-0-0-with-ucx-and-infiniband-support-on-centos-7/
https://github.com/openucx/ucx/wiki/OpenMPI-and-OpenSHMEM-installation-with-UCX

./configure --with-ucx=/usr/local/bin/ucx/ --prefix=/usr/local/openmpi --enable-mca-no-build=btl-uct
make all install
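To confirm that UCX support was actually built in (paths follow the prefix above):

# the ucx PML component should appear in the output
/usr/local/openmpi/bin/ompi_info | grep -i ucx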

https://docs.microsoft.com/bs-latn-ba/azure/virtual-machines/workloads/hpc/setup-mpi
The mpicc family of wrappers was not responding; most likely the wrong module was loaded.

Group and user, shared directory

#Add group
sudo groupadd admins
#Append user to group
sudo usermod -a -G admins user
#assign shared dir to group
sudo chgrp -R admins /path/to/dir
sudo chmod -R 777 /path/to/dir
ls -l
-rwxrwxrwx.  1 root admins
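Optionally (not in the original steps), set the setgid bit so that new files created inside automatically inherit the group:

# new files/subdirectories will be created with group admins
sudo chmod g+s /path/to/dir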

Add users

useradd -m -d /home/clb clb
passwd clb
#uncomment wheel line in /etc/sudoers
usermod -a -G wheel clb
su clb
groups
useradd -m -d /home/mbog mbog
passwd mbog
cd /data; mkdir mbog
chown -R mbog mbog
sudo usermod -a -G share mbog
su mbog

as mbog:

cd /usr/syssoft/autojump
./install.sh
conda init

Add the following line to ~/.bashrc:

    [[ -s /home/zsc/.autojump/etc/profile.d/autojump.sh ]] && source /home/zsc/.autojump/etc/profile.d/autojump.sh
Change the access of the shared folder:

chgrp -R share shared/

PBS Pro open source version

https://www.misaraty.com/204
https://github.com/PBSPro/pbspro/blob/master/INSTALL
http://community.pbspro.org/t/pbs-dataservice-not-running/1556/4
http://community.pbspro.org/t/pbs-pro-public-version-installation-on-ubuntu-18-04-2-lts-bionic-beaver/1500/7
http://community.pbspro.org/t/the-problem-with-running-on-a-clean-virtual-machine/1612/5
https://blog.csdn.net/edide/article/details/52389946
https://www.awaimai.com/762.html
http://community.pbspro.org/t/unable-to-start-pbs-service-selinux-enforcing/1267

Download the rpm file and install:

#Prerequisites
yum install -y rpm-build libtool hwloc-devel \
libX11-devel libXt-devel libedit-devel libical-devel \
ncurses-devel perl postgresql-devel postgresql-contrib python3-devel tcl-devel \
tk-devel swig expat-devel openssl-devel libXext libXft \
autoconf automake expat libedit postgresql-server \
sendmail tcl tk libical
#Turn off firewall and SElinux 
systemctl disable firewalld.service 
# In /etc/selinux/config , change
SELINUX=disabled  
reboot
wget https://github.com/PBSPro/pbspro/releases/download/v19.1.3/pbspro_19.1.3.centos_7.zip
unzip *.zip
cd pbspro_19.1.3.centos_7
yum -y install pbspro-server-19.1.3-0.x86_64.rpm
chmod 4755 /opt/pbs/sbin/pbs_iff /opt/pbs/sbin/pbs_rcp
chown -R postgres:postgres /var/spool/pbs/datastore
# the option -c "PBS datastore service user" is not accepted
useradd -m -d /home/pbsdata -s /bin/bash  -U pbsdata
# Add in /etc/profile
export PATH="/opt/pbs/bin:$PATH"
# For single-node system, change in /etc/pbs.conf
PBS_START_MOM=1
# In /etc/hosts, add
xxx.xxx.31.179 localhost localhost.localdomain
/etc/init.d/pbs start
qmgr -c "create node localhost"
qmgr -c 'create queue kgb1'
qmgr -c 'set server default_queue = kgb1'
qmgr -c 'set queue kgb1 queue_type = execution'
qmgr -c 'set queue kgb1 enabled = true'
qmgr -c 'set queue kgb1 started = true'
[root@localhost ~]# /etc/init.d/pbs restart
Restarting PBS
Stopping PBS
Shutting server down with qterm.
PBS server - was pid: 21085
PBS mom - was pid: 20309
PBS comm - was pid: 20281
Waiting for shutdown to complete
Starting PBS
PBS comm
/opt/pbs/sbin/pbs_comm ready (pid=24872), Proxy Name:localhost:17001, Threads:4
PBS mom
Creating usage database for fairshare.
PBS sched
Connecting to PBS dataservice.....connected to PBS dataservice@localhost
Licenses valid for 10000000 Floating hosts
PBS server
[root@localhost ~]# /etc/init.d/pbs status
pbs_server is pid 25679
pbs_mom is pid 24901
pbs_sched is pid 24913
pbs_comm is 24872

Test using a non-root user:

$echo 'sleep 30' | qsub
$qstat
Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
0.localhost       STDIN            xbao              00:00:01 R kgb1
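For anything beyond a smoke test, a job script is more convenient. A minimal sketch against the kgb1 queue created above (the resource request line is an assumption, adjust as needed):

#!/bin/bash
#PBS -N testjob
#PBS -q kgb1
#PBS -l select=1:ncpus=4
#PBS -j oe
cd $PBS_O_WORKDIR
sleep 30

Submit it with qsub testjob.pbs.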

Enable SELinux and the firewall again as root

systemctl enable firewalld.service
systemctl start firewalld.service
firewall-cmd --permanent --zone=public --add-port=15001-15004/tcp
firewall-cmd --permanent --zone=public --add-port=15007/tcp
firewall-cmd --permanent --zone=public --add-port=17001/tcp
# In /etc/selinux/config , change
SELINUX=permissive
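The --permanent rules above do not affect the running firewall until it is reloaded:

firewall-cmd --reload
# verify
firewall-cmd --list-ports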

Common commands

$qstat -a
localhost:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
0.localhost     xbao     kgb1     STDIN       25685   1   1    --    --  R 00:00
$pbsnodes -av
localhost
     Mom = localhost
     Port = 15002
     pbs_version = 19.1.3
     ntype = PBS
     state = free
     pcpus = 112
     resources_available.arch = linux
     resources_available.host = localhost
     resources_available.mem = 362666588kb
     resources_available.ncpus = 112
     resources_available.vnode = localhost
     resources_assigned.accelerator_memory = 0kb
     resources_assigned.hbmem = 0kb
     resources_assigned.mem = 0kb
     resources_assigned.naccelerators = 0
     resources_assigned.ncpus = 0
     resources_assigned.vmem = 0kb
     resv_enable = True
     sharing = default_shared
     last_state_change_time = Wed Feb  5 20:42:04 2020
     last_used_time = Wed Feb  5 20:42:35 2020

http://community.pbspro.org/t/stdout-and-stderr-in-submisson-directory/687/6
temporary output dir:
/var/spool/pbs/spool/

Let a user see other users' jobs

#qmgr
Max open servers: 49
Qmgr: set server query_other_jobs = True
Qmgr: quit

As a normal user:

$qstat -a
localhost:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
60.localhost    clb      kgb1     opx_pb05_l  12911   1  56    --  240:0 R 00:10

Jupyter configuration

https://blog.csdn.net/vola9527/article/details/80744540

conda env export > environment.yaml
https://github.com/ipython-contrib/jupyter_contrib_nbextensions

nbextensions need to be installed in the base env.

Sometimes JupyterHub cannot be shut down fully. When you open a new one, you get:

HTTP 403: Forbidden error when starting jupyterhub

Just kill all configurable-http-proxy processes; find them with:

ps aux | grep configurable-http-proxy
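To kill them all in one step (equivalent to grepping the PIDs and killing them by hand):

sudo pkill -f configurable-http-proxy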

No-password login

https://zhuanlan.zhihu.com/p/28423720

#local computer:
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub usr@remotehost:~/.ssh
#remote host
cat ~/.ssh/id_rsa.pub  >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
#Test on local computer
ssh usr@remotehost
#or
rsync -avz --exclude=zz /src_test/ username@B_IP:/dest_test/
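As an alternative to the manual scp + cat above, ssh-copy-id appends the key and fixes the permissions in one step:

ssh-copy-id usr@remotehost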

Oh my bash

oh-my-zsh is not fully compatible with bash-style env files and may cause serious problems (wd does not work, etc.).
https://stackoverflow.com/questions/764600/how-can-you-export-your-bashrc-to-zshrc
https://stackoverflow.com/questions/26616003/shopt-command-not-found-in-bashrc-after-shell-updation
https://yucongding.com/2019/03/05/oh-my-zsh/

Use oh-my-bash instead:

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmybash/oh-my-bash/master/tools/install.sh)"

If we append the old .bashrc to the new file, abrt reports problems like this:

bash: /root/.cache/abrt/lastnotification.qIYhOrz9: cannot overwrite existing file
'abrt-cli status' timed out

Simply modify the last paragraph in the file /etc/profile.d/abrt-console-notification.sh:

# always update the lastnotification
if [ -f "$TMPPATH" ]; then
    # Be quite in case of errors and don't scare users by strange error messages.
    date +%s >> "$TMPPATH" 2>"$ABRT_DEBUG_LOG"
    mv -f "$TMPPATH" "$SINCEFILE" >"$ABRT_DEBUG_LOG" 2>&1
fi

#timeout 10s abrt-cli status --since="$SINCE" 2>"$ABRT_DEBUG_LOG" || echo "'abrt-cli status' timed out"

In the agnoster theme, to show the conda env, modify agnoster.theme.sh:
https://github.com/diegocaro/agnoster-zsh-theme/blob/master/agnoster.zsh-theme
https://github.com/agnoster/agnoster-zsh-theme/pull/24
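A minimal sketch of such a segment, following the pattern in the PRs above (prompt_segment is the helper agnoster themes use; the colors and the hook point are assumptions):

# show the active conda env as its own prompt segment
prompt_virtualenv() {
  if [ -n "$CONDA_DEFAULT_ENV" ]; then
    prompt_segment blue black "($CONDA_DEFAULT_ENV)"
  fi
}
# then call prompt_virtualenv from build_prompt in agnoster.theme.sh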

gnuplot

#yum install gnuplot -y
$gnuplot

    G N U P L O T
    Version 4.6 patchlevel 2    last modified 2013-03-14
    Build System: Linux x86_64

    Copyright (C) 1986-1993, 1998, 2004, 2007-2013
    Thomas Williams, Colin Kelley and many others

    gnuplot home:     http://www.gnuplot.info
    faq, bugs, etc:   type "help FAQ"
    immediate help:   type "help"  (plot window: hit 'h')

Paraview

Download the binary release, unpack it, and run ParaView with:

cd path/to/paraview/root
# run paraview locally with default nvidia OpenGL
lib/mpiexec bin/paraview
# run paraview with mesa, works for vnc,ssh -X
cd bin; ./paraview-mesa paraview
# run pvserver
lib/mpiexec -np 10 bin/pvserver -display :0.0 --multi-clients --server-port=11111

ParaView uses MPI by default. The best practice for remote access is ParaView + VNC + mesa.
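For the pvserver route, a plain ssh tunnel also works (port number from the pvserver command above):

# on the local machine: forward the pvserver port
ssh -N -L 11111:localhost:11111 user@server
# then in the ParaView client: File > Connect > localhost:11111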

NVIDIA IndeX can work through VNC + mesa:


$nvidia-smi
Wed Feb 19 23:10:28 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:AF:00.0 Off |                  N/A |
| 24%   31C    P8     5W / 250W |    894MiB / 11016MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      5406      G   /usr/bin/X                                    79MiB |
|    0      6883      G   /usr/bin/gnome-shell                          63MiB |
|    0     83119      C   ...-MPI-Linux-Python3.7-64bit/bin/paraview   737MiB |
+-----------------------------------------------------------------------------+

To use NVIDIA IndeX for volume rendering with pvserver and a client, the software and driver versions must be the same on both ends, and the IndeX plugin must be loaded both remotely and locally.

Free cached memory every day

https://www.cnblogs.com/wade-luffy/p/6760414.html
https://blog.csdn.net/zalan01408980/article/details/80555492

crontab -e
# root:free cached memory
2 0 * * * sh -c "echo 3 > /proc/sys/vm/drop_caches"

Use crontab -l to see current settings.
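The kernel documentation recommends running sync first, so that dirty pages are written back before the caches are dropped; a variant of the cron line with that change:

2 0 * * * sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"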

Nvme tool

https://unix.stackexchange.com/questions/363212/smartmontools-with-nvme-support-on-centos-7
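Once a recent smartmontools is in place, NVMe health can be queried directly (the device name is an assumption):

sudo smartctl -a /dev/nvme0
# or, with the separate nvme-cli package
sudo nvme smart-log /dev/nvme0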
