
Today I'm posting the notes from an internal OpenStack training I gave to colleagues at work. They describe a deployment on Ubuntu based on the then-current O release (Ocata), written over a year ago. They may help beginners get started, although the whole process is really just following the official installation guide.

Installation Guide

Latest version: installation-openstack-ubuntu-note.md hosted on Github.com/wey-gu

ref: https://docs.openstack.org/install-guide

Ubuntu was chosen as the host OS.

“It’s a good way to learn by installing it manually for as many services as you could :-) .”

– Wey Gu

Host networking

ref: https://docs.openstack.org/install-guide/environment-networking.html

ref: https://help.ubuntu.com/lts/serverguide/network-configuration.html

The example architectures assume use of the following networks:

  • Management on 10.0.0.0/24 with gateway 10.0.0.1

    This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

  • Provider on 203.0.113.0/24 with gateway 203.0.113.1

    This network requires a gateway to provide Internet access to instances in your OpenStack environment.

My network solution

Net0:
Network name: VirtualBox host-only Ethernet Adapter
Purpose: administrator / management network
IP block: 10.20.0.0/24
DHCP: disabled
Linux device: eth0

Net1:
Network name: VirtualBox host-only Ethernet Adapter#2
Purpose: provider network
IP block: 172.16.0.0/24
DHCP: disabled
Linux device: eth1

Net2:
Network name: VirtualBox host-only Ethernet Adapter#3
Purpose: storage network
IP block: 192.168.99.0/24
DHCP: disabled
Linux device: eth2

Net3:
Network name: VirtualBox Bridged or NAT // for Internet access or remote access
Purpose: Internet
IP block: <depends on your network>
DHCP: enabled
Linux device: eth3

Edit the /etc/network/interfaces file to contain the following:

Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
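For reference, here is a sketch of what a complete /etc/network/interfaces could look like on the first node under the network plan above. The addresses 10.20.0.11 and 192.168.99.11 are made-up examples; pick any free addresses in your management and storage blocks.

# The management network interface (example static address)
auto eth0
iface eth0 inet static
    address 10.20.0.11
    netmask 255.255.255.0

# The provider network interface (no IP assigned; OpenStack manages it)
auto eth1
iface eth1 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

# The storage network interface (example static address)
auto eth2
iface eth2 inet static
    address 192.168.99.11
    netmask 255.255.255.0

# The Internet-facing interface (default route comes from DHCP here)
auto eth3
iface eth3 inet dhcp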

Key mapping

A few months ago, I switched from the HHKB Lite2 for Mac to the HHKB Pro 2 Type-S, so the arrow keys needed to be mapped to Ctrl + h/j/k/l. Following @zjun's best practice, I used Hammerspoon; this post records the configuration.

~/.hammerspoon
❯ cat init.lua
-- return a function that sends the given keystroke
-- (the trailing 1000 is the delay between key down and up, in microseconds)
local function pressFn(mods, key)
    if key == nil then
        key = mods
        mods = {}
    end

    return function() hs.eventtap.keyStroke(mods, key, 1000) end
end

-- bind mods + key to pressFn; passing it as both the pressed and the
-- repeat callback makes the mapping repeat while the key is held
local function remap(mods, key, pressFn)
    hs.hotkey.bind(mods, key, pressFn, nil, pressFn)
end

remap({'ctrl'}, 'h', pressFn('left'))
remap({'ctrl'}, 'j', pressFn('down'))
remap({'ctrl'}, 'k', pressFn('up'))
remap({'ctrl'}, 'l', pressFn('right'))

remap({'ctrl', 'shift'}, 'h', pressFn({'shift'}, 'left'))
remap({'ctrl', 'shift'}, 'j', pressFn({'shift'}, 'down'))
remap({'ctrl', 'shift'}, 'k', pressFn({'shift'}, 'up'))
remap({'ctrl', 'shift'}, 'l', pressFn({'shift'}, 'right'))

remap({'ctrl', 'cmd'}, 'h', pressFn({'cmd'}, 'left'))
remap({'ctrl', 'cmd'}, 'j', pressFn({'cmd'}, 'down'))
remap({'ctrl', 'cmd'}, 'k', pressFn({'cmd'}, 'up'))
remap({'ctrl', 'cmd'}, 'l', pressFn({'cmd'}, 'right'))

remap({'ctrl', 'alt'}, 'h', pressFn({'alt'}, 'left'))
remap({'ctrl', 'alt'}, 'j', pressFn({'alt'}, 'down'))
remap({'ctrl', 'alt'}, 'k', pressFn({'alt'}, 'up'))
remap({'ctrl', 'alt'}, 'l', pressFn({'alt'}, 'right'))

remap({'ctrl', 'shift', 'cmd'}, 'h', pressFn({'shift', 'cmd'}, 'left'))
remap({'ctrl', 'shift', 'cmd'}, 'j', pressFn({'shift', 'cmd'}, 'down'))
remap({'ctrl', 'shift', 'cmd'}, 'k', pressFn({'shift', 'cmd'}, 'up'))
remap({'ctrl', 'shift', 'cmd'}, 'l', pressFn({'shift', 'cmd'}, 'right'))

remap({'ctrl', 'shift', 'alt'}, 'h', pressFn({'shift', 'alt'}, 'left'))
remap({'ctrl', 'shift', 'alt'}, 'j', pressFn({'shift', 'alt'}, 'down'))
remap({'ctrl', 'shift', 'alt'}, 'k', pressFn({'shift', 'alt'}, 'up'))
remap({'ctrl', 'shift', 'alt'}, 'l', pressFn({'shift', 'alt'}, 'right'))

remap({'ctrl', 'cmd', 'alt'}, 'h', pressFn({'cmd', 'alt'}, 'left'))
remap({'ctrl', 'cmd', 'alt'}, 'j', pressFn({'cmd', 'alt'}, 'down'))
remap({'ctrl', 'cmd', 'alt'}, 'k', pressFn({'cmd', 'alt'}, 'up'))
remap({'ctrl', 'cmd', 'alt'}, 'l', pressFn({'cmd', 'alt'}, 'right'))

remap({'ctrl', 'cmd', 'alt', 'shift'}, 'h', pressFn({'cmd', 'alt', 'shift'}, 'left'))
remap({'ctrl', 'cmd', 'alt', 'shift'}, 'j', pressFn({'cmd', 'alt', 'shift'}, 'down'))
remap({'ctrl', 'cmd', 'alt', 'shift'}, 'k', pressFn({'cmd', 'alt', 'shift'}, 'up'))
remap({'ctrl', 'cmd', 'alt', 'shift'}, 'l', pressFn({'cmd', 'alt', 'shift'}, 'right'))
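If Hammerspoon is already running, the new mappings take effect after reloading the configuration, either from the Hammerspoon menu bar icon or by running hs.reload() in the Hammerspoon console.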

Gradient Descent for a linear regression task

Data preprocessing

  • Load the data from the csv file into a list
import csv

# load the training data, dropping the first three columns
# (date, station, feature name)
trainDataList = list()
with open("train.csv", newline="") as csvfile:
    trainData = csv.reader(csvfile, delimiter=",", quotechar="|")
    for line in trainData:
        trainDataList.append(line[3:])
  • Flatten all the training data into one list so that we can iterate over it per hour; this yields (24 * days - 10 + 1) training samples.

    trainDataIteratedPerHour: list()

# every 18 lines of the csv form one day (one line per feature)

import numpy as np

def mod_18(n):
    return n % 18

# transpose a matrix (2D list)
def T_list(l):
    return np.array(l).T.tolist()


i = 0
trainDataIteratedPerHour = []
listToAppend = []

# skip the csv header row
for item in trainDataList[1:]:
    if not mod_18(i):
        # first feature line of a new day: start a fresh 18-line block
        listToAppend = [item]

    elif mod_18(i) == 10:
        # rainfall line: treat "NR" (no rain) as 0
        listToAppend.append(["0" if x == 'NR' else x for x in item])

    elif mod_18(i) == 17:
        # all 18 lines collected: transpose so that each row is one hour
        listToAppend.append(item)
        trainDataIteratedPerHour.extend(T_list(listToAppend))
    else:
        # any other feature line
        listToAppend.append(item)
    i = i + 1
  • build x_data and y_data
"""
build x_data and y_data
"""

x_data = [] # 18*9 dimensions
y_data = [] # scalar, pm2.5 of next hour for x_data

for n in range(len(trainDataIteratedPerHour) - 10):
x_data.extend(np.array(trainDataIteratedPerHour[n : n + 9]).reshape(1,162).tolist())
y_data.append(trainDataIteratedPerHour[n + 10][9])

x_data = [[float(j) for j in i] for i in x_data]
y_data = [float(i) for i in y_data]
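A quick, optional sanity check on the resulting shapes (162 = 18 features * 9 hours):

print(len(x_data), len(x_data[0]))  # number of samples, 162 features each
print(len(y_data))                  # one pm2.5 label per sample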

  • draw a plot (to get a feel for the range of the data)
# draw a plot of the pm2.5 values

import matplotlib
import matplotlib.pyplot as plt

x = range(len(y_data))
y = np.array(y_data)
# refer to https://matplotlib.org/2.2.2/gallery/lines_bars_and_markers/simple_plot.html

fig, ax = plt.subplots()
ax.plot(x, y)

# plt.show()

Loss function

def lossFunction(w, x_data, y_data):
    w = np.array(w)
    x_data = np.array(x_data)
    result = 0.

    # sum of squared errors over all training samples
    for i in range(len(y_data)):
        result += (y_data[i] - sum(w * x_data[i]))**2
    return result

# w = [0.01] * 162
# L = lossFunction(w, x_data, y_data)
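As an aside, the same loss can be computed in one shot with NumPy matrix operations. A minimal sketch, equivalent to the loop above (lossFunctionVectorized is a made-up name):

def lossFunctionVectorized(w, x_data, y_data):
    # ||y - Xw||^2 via one matrix-vector product
    X = np.array(x_data)   # shape: (N, 162)
    y = np.array(y_data)   # shape: (N,)
    errors = y - X @ np.array(w)
    return float(errors @ errors)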

Iterations for gradient descent

# Iterations

import datetime

def iterationRun(lr, iteration, x_data, y_data):
    # initial data
    w = [0.01] * 162
    L_history = [lossFunction(w, x_data, y_data)]

    w = np.array(w)
    x_data = np.array(x_data)

    for iterator in range(iteration):
        # initialize w_grad (it accumulates the *negative* gradient)
        w_grad = [0.0] * 162
        # sum over all training samples
        for n in range(len(y_data)):
            # per feature
            for i in range(162):
                w_grad[i] = w_grad[i] - 2.0 * x_data[n][i] * (sum(w * x_data[n]) - y_data[n])

        # update w ("+" because w_grad holds the negative gradient)
        for i in range(162):
            w[i] = w[i] + lr * w_grad[i]

        # store the loss function history for plotting
        L_history.append(lossFunction(w, x_data, y_data))
        print(str(iterator) + " : " + str(datetime.datetime.now().time()) + " L:" + str(L_history[-1]))
    return L_history
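The triple Python loop above does N * 162 scalar updates per pass, which is why each iteration takes roughly half a minute in the log below. A vectorized sketch of the same update, for comparison (iterationRunVectorized is a made-up name):

def iterationRunVectorized(lr, iteration, x_data, y_data):
    X = np.array(x_data)   # (N, 162)
    y = np.array(y_data)   # (N,)
    w = np.full(162, 0.01)
    L_history = [float(((y - X @ w) ** 2).sum())]

    for _ in range(iteration):
        grad = 2.0 * X.T @ (X @ w - y)   # dL/dw, summed over all samples
        w = w - lr * grad                # standard gradient descent step
        L_history.append(float(((y - X @ w) ** 2).sum()))
    return L_history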

Run it for 10 iterations

lr = 0.0000000000001  # learning rate
iteration = 10
iterationRun(lr, iteration, x_data, y_data)

Tuning

Find a good initial learning rate

Start with lr = 0.0000000000001:

In [19]: lr = 0.0000000000001 # learning rate
...: iteration = 10
...: iterationRun(lr,iteration,x_data,y_data)
...:
0 : 13:21:22.227145 L:5897161.869645979
1 : 13:21:53.988814 L:5890818.193569232
2 : 13:22:25.639150 L:5884483.073933817
3 : 13:22:57.211950 L:5878156.49918669
4 : 13:23:28.873442 L:5871838.457790465
5 : 13:24:01.059230 L:5865528.93822318

Hmm… try a larger lr.

lr = 0.00000000001

The loss function is now descending faster, by roughly 60000 per epoch.
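To find a workable starting point more systematically, one option is a coarse sweep over a few orders of magnitude, comparing how much the loss drops in a couple of iterations. A minimal sketch reusing iterationRun (the candidate values are made-up examples):

# hypothetical coarse sweep over learning rates
for lr in (1e-13, 1e-12, 1e-11, 1e-10):
    history = iterationRun(lr, 2, x_data, y_data)
    print(lr, history[0] - history[-1])  # total loss decrease; negative means it diverged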
