2. OS Configurations

General Information HERE

2.1. Linux

This section overviews the specific configurations for Linux systems.

2.1.1. Linux-GUI: Mint

I’ve been running Linux Mint for the last couple of years, and have so far preferred it to Ubuntu distributions. I also tried Elementary OS, which had a nice UI; however, the inner workings of its networking did not fit my needs, and the UI was not a comparable enough experience coming from macOS.

2.1.1.1. Installing from USB

  1. Download the image from the Linux Mint Download Page.

  2. Use the Balena Etcher tool to create the USB install media.

  3. Insert the USB key into the target hardware and enter the boot menu with one of the F2 | F12 | del keys. Then, during installation, choose the “something else” option when prompted to erase or install beside an existing OS.

  4. Partition the hard drive using the following approximate scheme:

    • One EFI partition, 100-500 MB

    • One ext4 partition mounted at ‘/’ for the root OS, ~15-30 GB

    • One ext4 partition mounted at ‘/home’ for all data, ~30+ GB

    • One swap partition, ~1-4 GB

    This approach places the OS on its own partition and separates the home directory from anything that is application/OS specific.

2.1.1.2. Installing Packages

Use the following to install all dependencies and base applications on a fresh Linux install.

## updates & upgrades:
sudo apt-get update
sudo apt-get upgrade

## create new su account:
sudo passwd

## Install git and core development tools:
sudo apt-get install \
    git-core \
    cmake \
    build-essential \
    gdb \
    vim \
    openssh-client \
    sshfs \
    cifs-utils \
    zsh \
    vtop \
    screen \
    imagemagick \
    python3-pip \
    python3-virtualenv \
    python3-tk \
    libgtest-dev \
    zlib1g-dev \
    libturbojpeg \
    libssl-dev \
    libuv1-dev \
    libsm6 \
    libxext6 \
    libxrender-dev

## Change the shell to zsh:
chsh -s `which zsh`

Troubleshooting:

If you get a PAM permission error, or a which zsh invalid command error, the most likely culprit is that the shell field in /etc/passwd is malformed, e.g.:

root:x:0:0:root:/root: which zsh

This should be changed to:

root:x:0:0:root:/root:/usr/bin/zsh

Log out and back in for the changes to take effect.
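The login shell is the seventh colon-separated field of an /etc/passwd entry, which makes the breakage easy to spot. A self-contained illustration (the entries below are examples, not read from the live file):

```shell
# The login shell is the 7th colon-separated field of an /etc/passwd entry.
# Example broken vs. fixed entries (illustrative):
broken='root:x:0:0:root:/root: which zsh'
fixed='root:x:0:0:root:/root:/usr/bin/zsh'

# Extract the shell field of the fixed entry:
echo "$fixed" | awk -F: '{print $7}'   # → /usr/bin/zsh
```

The same awk one-liner, pointed at the real file, can be used to inspect any account's shell field before and after the fix.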

2.1.1.2.1. Docker

# Install Docker
sudo apt-get install docker docker.io
# set permissions (make sure $USER is set correctly)
sudo usermod -a -G docker $USER
# reboot for the group change to take effect
sudo reboot
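Group changes only apply to new login sessions. After the reboot, membership can be confirmed with a quick sketch like the following (the docker group will only appear once the usermod step has taken effect):

```shell
# List the current user's groups, one per line;
# 'docker' should appear after the usermod step and a re-login/reboot:
id -nG | tr ' ' '\n'
```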
2.1.1.2.2. Latex & Doxygen
## Firstly Install Latex & dependencies:
# cd into temp folder:
cd ~/Downloads
# download the TexLive file:
wget http://mirror.ctan.org/systems/texlive/tlnet/install-tl-unx.tar.gz
# unpack:
tar -xvf install-tl-unx.tar.gz
# the archive unpacks into a dated directory, e.g. install-tl-20200420:
cd install-tl-*
# call installer (sudo is needed for the default /usr/local install location):
sudo perl install-tl
# ^^^^ TAKES A LOOOOONG TIME!
# add to path (in bashrc or zshrc; adjust the year to your TeX Live release)
export PATH="/usr/local/texlive/2020/bin/x86_64-linux:$PATH"
# test with:
latex small2e

## Next Install Ghostscript for file exporting:
# install:
sudo apt-get install ghostscript
# test:
gs

## FINALLY Install Doxygen with graph generation support
# install using ubuntu repo:
sudo apt-get install doxygen
# may need to install 'dot' with:
sudo apt-get install graphviz

2.1.1.3. Configuring Packages

2.1.1.3.1. ZSH

First step is to set zsh to use Oh my zsh:

## Get Oh My Zsh:
wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh

TIP: to replace the username with a colored root/user indicator in the zsh prompt, add the following to the bottom of the .zshrc file:

if [[ $EUID == 0 ]]; then
    PROMPT="%B%F{red}root%b%f $PROMPT"
else
    PROMPT="%B%F{green}user%b%f $PROMPT"
fi

TIP: to share the local user’s zsh configuration with the superuser, link $HOME/.oh-my-zsh and $HOME/.zshrc to /root/.oh-my-zsh and /root/.zshrc:

sudo ln -s $HOME/.oh-my-zsh /root/.oh-my-zsh
sudo ln -s $HOME/.zshrc /root/.zshrc
2.1.1.3.2. GIT

GitHub has introduced new token- and two-factor-based authorization for cloning.

Please follow GitHub’s, or GitLab’s, instructions to create and attach tokens to your projects.

To remove existing credentials stored using git’s credential helper, use the following:

git config --global --unset credential.helper

NOTE On a mac system, the keychain stores git’s credentials, see here.

The following is deprecated for GitHub.

Use the following to enable the git credential helper (note, this may not be supported by GitHub in the near future):

# setup credential helper:
git config --global credential.helper store
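If you would rather not keep plaintext credentials on disk, git also ships an in-memory cache helper that forgets credentials after a timeout; a sketch of that alternative to store:

```shell
# Cache credentials in memory for one hour instead of storing them on disk:
git config --global credential.helper 'cache --timeout=3600'

# Confirm the setting:
git config --global credential.helper   # → cache --timeout=3600
```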
2.1.1.3.3. GTest

Finalize the GTest install by building and linking the compiled libraries:

cd /usr/src/gtest
sudo cmake CMakeLists.txt
sudo make
# copy the static libraries into the linker path (on newer releases they
# may be emitted into a ./lib subdirectory):
sudo cp *.a /usr/lib
2.1.1.3.4. Pyenv

Install pyenv using the guide provided HERE.

2.1.1.3.5. Vim

To finalize VIM configuration, add Vundle Package to VIM:

git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim

To setup EN’s vim:

wget https://raw.githubusercontent.com/evgenyslab/labmanual/master/docs/source/codeSauces/vimrc -O ~/.vimrc

Once installed, in vim use :PluginInstall to install all plugins.

2.1.1.4. Hardware-Specifics

This section is meant to capture hardware-specific configurations I’ve encountered.

2.1.1.4.1. Lenovo-Wacom Tablets

Note that for the Lenovo X1 Thinkpad with a Wacom tablet, I was able to install Linux Mint natively with VM Player running W10. To get pen input to work correctly (namely, in OneNote in W10), the VM needs to hand control of Wacom pen input to Linux, AND it’s best to disable the touch capability of the Wacom tablet:

see: HERE .

xsetwacom --list devices
# prints out the device list... there should be a touch device

# disable finger touch:
xsetwacom --set "Wacom Intuos Pro M Finger touch" Touch off

# confirm (should print "off"):
xsetwacom --get "Wacom Intuos Pro M Finger touch" Touch

This way, in the VM, windows (and host Linux) will only react to pen input, meaning that in OneNote you will not get the pen marking up the page from your palm.

2.1.2. Linux Server

ISSUES TO RESOLVE

  • [ ] docker loses containers / images on restart; seems to be a known issue

  • [ ] docker can’t link the GPU after a restart; seems to be fixed with sudo systemctl stop docker / sudo systemctl start docker

The Linux server installation and configuration is almost identical to the standard Linux Mint installation, with some slight changes to account for the lack of X, i.e. running headless.

The major caveat of installing a headless Linux version is that there is not really a clean way to do it without some monitor or visual feedback, since visual feedback is needed to verify choices and selections.

I’ve been using the Ubuntu Server image for headless installations. This OS has proven to work stably in the environments I require.

The installation image can be found at the Ubuntu Server Download page.

To install from USB, see Installing from USB.

2.1.2.1. Installing Packages

sudo apt-get install git-core \
    cmake \
    hwinfo \
    build-essential \
    vim \
    zsh \
    htop \
    screen \
    libbz2-dev \
    libreadline-dev \
    libsqlite3-dev \
    python3-pip

wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh

# change the shell:
chsh -s $(which zsh)
# todo.. ln -s the oh my zsh folder from user to root...

TODO: need to install samba utilities for mounting an SMB drive…

2.1.3. Docker Installation

2.1.4. Nvidia Driver Installation

Get GPU hardware info:

hwinfo --gfxcard --short

Get Nvidia drivers:

apt search nvidia-driver
sudo apt-get install nvidia-driver-450-server

# confirm with:
nvidia-smi

2.1.4.1. Docker GPU Configuration

For Docker installation, see: Docker Installation

To get docker to use GPUs: [ref]

# install runtime:
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list |\
    sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime

# restart docker service:
sudo systemctl stop docker
sudo systemctl start docker

2.1.4.2. Docker Image Build

The provided docker images (in dockerfiles dir) have minimal necessary builds for python3-based development using pytorch and either ssh development (with jetbrains tools), or jupyterlab.

Building:

# Base nvidia-gpu container with pytoch:
docker build -t nvidia-gpu-base -f nvidia-gpu-base .

# Jupyterlab build:
docker build -t nvidia-gpu-jupyter -f nvidia-gpu-jupyter .

# Remote SSH development build:
docker build -t nvidia-gpu-ssh -f nvidia-gpu-ssh .
2.1.4.2.1. NOTES

I’ve added two extra dockerfiles with -dev- in the middle: one for nvidia-gpu-dev-base and one for nvidia-gpu-dev-ssh. These files use the cudnn7 and development base images, which should provide access to the nvcc compiler and Nvidia headers.

I’ve noticed my server machine has trouble auto-starting the docker service on reboot; running:

sudo systemctl stop docker
sudo systemctl start docker

fixes the issue. However, I will have to dig in further to identify the root cause.

2.1.4.3. Docker Image Running

Two images can be run: either the jupyter or the ssh development image.

2.1.4.3.1. JUPYTER

To use the nvidia-gpu-enabled docker container and develop remotely, firstly, on the server side, run the docker container and map any necessary data folders into the container:

# Emphasis on --gpus all
docker run -d --gpus all -p 8888:8888 -v /path/to/Data:/tmp/Data --name dev-gpu nvidia-gpu-jupyter:latest

This will run a docker instance with Jupyter Lab running in the /tmp directory (listening at 0.0.0.0) and map the container’s 8888 port to the server’s 8888 port.

Once the container is running, to get the access token, on the server, run:

docker logs dev-gpu  # or the corresponding name of the container

This will print out the stdout of the container and will reveal Jupyter’s access token.
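The token can also be pulled out of the log output with standard tools; a self-contained sketch (the log line below is a fabricated example of the URL format Jupyter prints, and the token value is fake):

```shell
# Example Jupyter log line (token value is fake):
log_line="http://127.0.0.1:8888/lab?token=abcd1234"

# Extract just the token:
echo "$log_line" | grep -oE 'token=[A-Za-z0-9]+' | cut -d= -f2   # → abcd1234
```

In practice the same pipeline would be fed from `docker logs dev-gpu 2>&1` instead of the hard-coded line.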

At this point, the Jupyterlab instance can be checked on the server by using wget localhost:8888, which will download an index.html file into the current directory.

To access the Jupyterlab on the working machine (laptop, etc.), two options are possible:

  1. Open browser and navigate to <server_ip>:8888

  2. Port forward the server’s 8888 port to your machine’s desired port with

    ssh -N -f -L localhost:8888:localhost:8888 server_username@server_ip
    
    then open a browser and navigate to localhost:8888
    

Note: shutting down jupyter from the web interface will close the container as well!

2.1.4.3.2. SSH-Remote Development (Jetbrains)

On the server, run the container:

docker run -d --gpus all --cap-add sys_ptrace -p 127.0.0.1:2222:22 -v /home/en/Data:/tmp/Data --name dev-gpu nvidia-gpu-ssh

On local machine, port forward a local port to the server’s 2222 port:

ssh -N -f -L localhost:3333:localhost:2222 server_username@server_ip

Now, in PyCharm, a new ssh environment can be added on localhost port 3333 with credentials user:password.

2.1.4.4. Verify Cuda

To verify cuda is running, in a jupyter cell or the pycharm console, run one or both of the following:

# access container command:
!nvidia-smi

# get through torch:
import torch
torch.cuda.device_count()
torch.cuda.get_device_name(0)

2.1.4.5. Caveats

It seems it is not possible to install Ubuntu 20.04 Server without an Ethernet connection.

On a fresh install, the workaround is: update, upgrade, then nmcli install + config.

sudo apt install network-manager
nmcli d wifi list
nmcli d connect MY_SSID password MY_SSID_PASSWORD

nmcli connection edit MY_SSID
nmcli> set ipv4.addresses 192.168.1.22/24
nmcli> set ipv4.gateway 192.168.1.1
nmcli> set ipv4.dns 8.8.8.8,8.8.4.4
nmcli> save
nmcli> quit

reboot

In /etc/resolv.conf, ensure the nameserver is set to the router IP, or 8.8.8.8.

2.1.5. Tips & Tricks

Some systems have auto sleep enabled by default as a system service. This may not be desirable for systems that should stay awake for remote work.

It is possible to check /var/log/syslog to see if sleep.target is triggered after a period of inactivity.

To disable the automatic sleep and hibernate services, use:

# Inspect:
systemctl status sleep.target
# Disable:
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

ref

2.1.5.1. Linux systemctl

TODO: usage

2.1.5.2. Linux Startup Service

TODO: how to create a linux startup service

  • create the startup script/application

  • create a .service file and put it in /etc/systemd/system/ [TODO add example from other service for how it looks / breaks down]

  • in the .service file, call your script/application

  • put your script/application in the /etc/ directory (as part of the install process, same with the .service file)

  • in the install process, run chmod +x on the script/application

  • in the install process, run chmod 664 on the .service file
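The steps above can be sketched with a generic unit file; every name and path below is an illustrative placeholder, not from an existing service:

```shell
# Write a minimal unit file (names/paths illustrative); it would then be
# copied to /etc/systemd/system/ as part of the install process:
cat > myapp.service <<'EOF'
[Unit]
Description=My startup application
After=network.target

[Service]
Type=simple
ExecStart=/etc/myapp/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Install steps (require root):
#   sudo cp myapp.service /etc/systemd/system/
#   sudo chmod 664 /etc/systemd/system/myapp.service
#   sudo systemctl daemon-reload
#   sudo systemctl enable myapp.service
```

`WantedBy=multi-user.target` is what makes the service start at boot once enabled.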

2.2. macOS

macOS is built on a Unix-like system; however, unlike common Linux distros, it is missing a package manager (i.e. apt).

Currently testing with Big Sur

2.2.1. Install Brew

Thus, the first step in setting up a mac for development is the installation of a package management tool, namely homebrew, or brew. The installation instructions can be found on the Brew website.

2.2.2. Install Packages

Once brew is installed, the following packages can be installed:

# Update Brew
brew update

# Install zsh --deprecated, zsh is native to mac
# brew install zsh

# Install macvim
brew install macvim

# Install cmake
brew install cmake

# Install python
brew install python

# Install pyenv (for python versions)
brew install pyenv

# Install virtualenv for python
brew install virtualenv

# Install MacTex for Latex compilation (distributed as a cask):
brew install --cask mactex

# Graphviz for doxygen
brew install graphviz

# Install doxygen:
brew install doxygen

# Install drawio (distributed as a cask):
brew install --cask drawio

2.2.3. Supplementary Packages

The following packages are not available through Brew at the moment, and thus warrant their own section.

2.2.3.1. Docker

Docker is an OS-level virtualization platform for running applications. It is useful for development and running applications of different languages and ensures the underlying OS is configured for the application.

For more information about docker, see the page Docker.

To install Docker for Mac, follow the instructions on the Docker page.

2.2.3.2. GTest

GTest is a C++ test-suite developed by Google.

The installation instructions for macOS can be found in GTest Installation.

The installation requires updating ~/.zshrc file.

2.2.4. Tips & Tricks

The following tips and tricks are accumulated over time.

2.2.4.1. MDLS File Inspection

The mdls command can be used to retrieve metadata for any file, which is useful when scripting file renaming.

If the command returns (null), it means the Spotlight index needs to be rebuilt on the drive using sudo mdutil -E /Drive.

2.2.4.2. Remote Parallels

The standard Parallels installation does not provide command line tools and integrations, however, that does not mean that we cannot ssh into a linux image that is installed and running.

In my image configurations, I use the default network adaptor to expose the Parallels image to my network and allow it to dynamically receive an IP on my local network.

Then, I can simply install and use openssh to remote log into the virtual machine.

This is also useful for remote development methods as described in the Remote Development section.

2.3. Common Package Configurations

TODO: what is the purpose of this section? to provide common package configs between linux/mac; install instructions should have been handled in os-specific location.

git

2.3.1. Pyenv

Pyenv post installation configuration can be found HERE.

2.3.2. ZSH

The first part of configuring zsh is to install Oh My Zsh: Oh My ZSH.

The next step is to set up the ~/.zshrc file. There are many ways to configure it; the following is an example of what I have appended to mine, along with some descriptive information for my items.

Note, I am using robbyrussell theme.

# Some ZSH sauce to add 
# after oh-my-zsh file
# 
# Can be added with `wget <url> -O ~/.zshrc`
#
#

# Part of Pyenv install:
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"

# Aliasing auto python virtual environment creating using `mkvenv`
# that relies on pyenv installation of python3.8:

# determine if pyenv is installed (note: do not `exit` here -- an exit in a
# sourced rc file would kill the shell on startup)
if ! command -v pyenv &> /dev/null; then
    echo "pyenv is not installed, reverting to system python"
    export USING_PYTHON=$(which python3)
else
    export USING_PYTHON=$(pyenv which python3)
fi

alias activate="source .venv/bin/activate"
# MAC only:
if [[ "$(uname)" == "Darwin" ]]; then
    alias vim="mvim -v"
fi

# may want to remove the below:
# Part of GTEST install
export CPLUS_INCLUDE_PATH="/usr/local/include"
export LIBRARY_PATH="/usr/local/lib"



NOTE: the above is post-pyenv install! make sure not to duplicate…

This can be quickly added to your ~/.zshrc using the following command:

curl -o ~/.zshrc https://raw.githubusercontent.com/evgenyslab/labmanual/master/docs/zshsauce
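The mkvenv alias referenced in the snippet's comments is not actually defined in it; a plausible definition, using the USING_PYTHON variable set above, might be (this is my assumption, not the original file's content):

```shell
# Hypothetical mkvenv alias (assumed, not from the original zshrc):
# creates a .venv virtual environment with the pyenv-resolved python,
# ready to be entered with the `activate` alias defined above:
alias mkvenv='$USING_PYTHON -m venv .venv'
```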

2.3.3. VIM

Vim is a terminal editor that is very portable. My take on the configuration of vim can be found ADD XREF.

To setup EN’s vim [MAC]:

curl -o ~/.vimrc https://raw.githubusercontent.com/evgenyslab/labmanual/master/docs/source/codeSauces/vimrc

2.4. Miscellaneous

The following are general tips and tricks picked up over time that are inevitably partially forgotten.

2.4.1. Executables

To make files executable, especially bash/shell scripts, change the file access control:

# Change access:
chmod +x myfile.sh
# Run the file:
./myfile.sh
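A self-contained end-to-end version of the above (the file name and contents are illustrative):

```shell
# Create a tiny script, mark it executable, and run it:
cat > /tmp/myfile.sh <<'EOF'
#!/bin/sh
echo "hello from script"
EOF

chmod +x /tmp/myfile.sh
/tmp/myfile.sh   # → hello from script
```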

2.4.2. SSH Port Forwarding

SSH port forwarding enables you to tunnel traffic on a specific port from one device to another:

[TODO]

This is very helpful in applications where a headless device needs to send information over a port to a remote device with a UI; the best example of this use case is running Jupyterlab on a remote/docker device and forwarding the web UI to the local machine. See HERE for more information.
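The -L specification used throughout these notes has the form local_port:destination_host:destination_port. A small self-contained sketch of how such a spec breaks down (the spec string is an example, and the shell parsing is purely illustrative, not part of ssh):

```shell
# Example -L spec: forward local port 8888 to port 8888 on the destination
# host, as seen from the SSH server:
spec="8888:localhost:8888"

# Pure-shell breakdown of the three fields (illustrative only):
local_port=${spec%%:*}      # text before the first ':'
rest=${spec#*:}
dest_host=${rest%%:*}
dest_port=${rest##*:}
echo "forward localhost:$local_port -> $dest_host:$dest_port"
```

The full command then wraps this spec: `ssh -N -f -L <spec> user@gateway`, where -N runs no remote command and -f backgrounds ssh after authentication.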

2.4.3. SCP (Copy) Through SSH Tunnel

In a situation where a file needs to go from A <--> B <--> C, it is desirable not to double copy through B.

To facilitate a simpler transaction, use SSH port tunneling to copy directly A <--> C.

For this example, A will be the receiving end (user end) and C will be the remote source/destination.

  1. On A, create ssh tunnel through B to C using:

    ssh -L 12321:hostC:22 userB@hostB
    

    Where 12321 is a randomly selected available port, hostC is the IP address of C that is known to B, userB is the username at B, and finally, hostB is the IP of B.

    Note, this will open a remote connection in the current terminal to B.

  2. On A, then run the scp command with a port designation:

    scp -P 12321 userC@127.0.0.1:/path/to.file /local/destination
    

    Note, the source/destinations can be changed based on the required transfer direction.