NOTE:

For demonstration purposes only, not for use in production. 

Refer to the following documentation for detailed information:

https://docs.oracle.com/en/operating-systems/olcne/


 

Step 1:

Set up your environment/machines:

You need a minimum of three machines: an Operator node, a Control Plane node, and at least one Worker node (this guide uses two Worker nodes).

For Detailed Info:

https://blogbypuneeth.medium.com/1-setup-your-nodes-ocne-1-7-b911cbdf3202

 

a. Set the hostname (on each node):

hostnamectl set-hostname operator.xx.xxxxx.com
hostnamectl set-hostname control.xx.xxxxx.com
hostnamectl set-hostname worker1.xx.xxxxx.com
hostnamectl set-hostname worker2.xx.xxxxx.com
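
To verify, you can check that each node now reports the fully qualified name you set:

hostnamectl status
hostname -f   # should print the FQDN, e.g. operator.xx.xxxxx.com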

 

b. Edit the /etc/hosts file and map each node's IP address to its hostname and alias (On All Nodes):

10.xx.xx.xxx operator.xx.xxxxx.com operator
10.xx.xx.xxx control.xx.xxxxx.com control
10.xx.xx.xxx worker1.xx.xxxxx.com worker1
10.xx.xx.xxx worker2.xx.xxxxx.com worker2
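
As an optional sanity check, each alias should now resolve to the IP address you mapped:

for h in operator control worker1 worker2; do getent hosts "$h"; done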

 

Step 2:

 

a. Create an "opc" user and assign elevated permissions to this user. (On All Nodes)

For Detailed Info:

https://blogbypuneeth.medium.com/create-a-new-opc-user-with-elevated-privileges-on-oracle-linux-8-x-50bdd45082f4

 

Commands:

useradd opc
passwd opc
usermod -aG wheel opc
visudo

In visudo, uncomment (or add) the following line so that members of the wheel group can run sudo without a password:

%wheel ALL=(ALL) NOPASSWD: ALL
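
As a quick check, you can switch to the new user and confirm that passwordless sudo works:

su - opc
sudo whoami   # should print "root" without prompting for a password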

 

b. Log in as the opc user and run the following commands (On All Nodes):

sudo mkdir -p /etc/systemd/system/crio.service.d
sudo chmod -R 755 /etc/systemd/system/crio.service.d
cd /etc/systemd/system/crio.service.d

 

sudo vi /etc/systemd/system/crio.service.d/proxy.conf

 

[Service]
Environment="HTTP_PROXY=http://www-proxy-xxxx.xx.xxxxx.com:80"
Environment="HTTPS_PROXY=http://www-proxy-xxxx.xx.xxxxx.com:80"
Environment="NO_PROXY=localhost,127.0.0.1,.xx.xxxxx.com,10.xx.xx.xxx,10.xx.xx.xxx,10.xx.xx.xxx"
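
After saving the file, reload systemd so the drop-in is registered (this is harmless even though CRI-O is not installed yet; the provisioning step installs it later). Once CRI-O is installed and running, you can confirm the environment was applied:

sudo systemctl daemon-reload

# After provisioning installs and starts CRI-O:
systemctl show crio --property=Environment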

 

Step 3:

 

a. Log in to the Operator node as the user who will run the olcnectl commands.

In this case it's opc, so log in as the opc user and create an SSH key pair. (On Operator node)

ssh-keygen -t rsa

You should see the key pair created as follows:

[opc@operator .ssh]$ pwd
/home/opc/.ssh
[opc@operator .ssh]$ ls 
authorized_keys  id_rsa  id_rsa.pub  known_hosts

 

For Detailed Info:

https://blogbypuneeth.medium.com/ocne-1-7-setup-ssh-key-based-authentication-9f4dc6f2168a

 

b. Now transfer the public key to all other nodes (On Operator node)

ssh-copy-id opc@operator
ssh-copy-id opc@control
ssh-copy-id opc@worker1
ssh-copy-id opc@worker2

After running the above commands successfully, you will see that the contents of the Operator node's public key (id_rsa.pub) have been added to the $HOME/.ssh/authorized_keys file on the Control and Worker nodes.

cat $HOME/.ssh/authorized_keys (On All Nodes)

Now log in to each node from the Operator node using SSH and check that passwordless authentication succeeds. (Make sure you are logged in as the "opc" user.)

ssh opc@control

ssh opc@worker1

ssh opc@worker2

 

You should see the authorized_keys file present in $HOME/.ssh on all nodes.
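
As an optional one-shot check from the Operator node, each iteration of the following loop should print the remote hostname without a password prompt:

for h in control worker1 worker2; do ssh opc@"$h" hostname; done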

 

Step 4:

 

Install and enable Chrony: (On All Nodes)

sudo dnf -y install chrony
sudo systemctl enable --now chronyd
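
To confirm the time service is running and the clock is synchronizing:

systemctl status chronyd --no-pager
chronyc tracking   # "Leap status : Normal" indicates the clock is in sync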

 

Step 5:

 

Set proxy: (On All Nodes)

export https_proxy=http://www-proxy-xxxx.xx.xxxxx.xxx:80
export http_proxy=http://www-proxy-xxxx.xx.xxxxx.xxx:80
export no_proxy="localhost,127.0.0.1,.xx.xxxxx.com,10.xx.xx.xxx,10.xx.xx.xxx,10.xx.xx.xxx"
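
Note that these exports only last for the current shell session. To make them survive logouts and reboots, one option is to append the same lines to the opc user's ~/.bashrc (placeholder values as above):

cat >> ~/.bashrc <<'EOF'
export https_proxy=http://www-proxy-xxxx.xx.xxxxx.xxx:80
export http_proxy=http://www-proxy-xxxx.xx.xxxxx.xxx:80
export no_proxy="localhost,127.0.0.1,.xx.xxxxx.com,10.xx.xx.xxx,10.xx.xx.xxx,10.xx.xx.xxx"
EOF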

 

Step 6: (On Operator node)

 

Set up the Operator node:

sudo dnf -y install oracle-olcne-release-el8

 

sudo dnf config-manager --enable ol8_olcne17 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream
sudo dnf config-manager --disable ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12
sudo dnf repolist --enabled | grep developer
sudo dnf config-manager --disable ol8_developer

 

sudo dnf install olcnectl
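
To confirm the installation, olcnectl should report its version:

olcnectl version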

 

For Detailed Info:

https://blogbypuneeth.medium.com/3-setup-operator-node-ocne1-7-74aff80e2fea

 

Step 7: (On All Nodes)

Restart all nodes.

 

Step 8: (On Operator node)

olcnectl provision \
  --api-server operator.xx.xxxxx.com \
  --control-plane-nodes control.xx.xxxxx.com \
  --worker-nodes worker1.xx.xxxxx.com,worker2.xx.xxxxx.com \
  --environment-name myenvironment \
  --name mycluster \
  --http-proxy "www-proxy-xxxx.xx.xxxxx.com:80" \
  --https-proxy "www-proxy-xxxx.xx.xxxxx.com:80" \
  --no-proxy "localhost,127.0.0.1,.xx.xxxxx.com,10.xx.xx.xxx,10.xx.xx.xxx,10.xx.xx.xxx" \
  --yes --debug

 

NOTE: 

Add the IP addresses of all nodes in the Kubernetes cluster to the end of the --no-proxy list, as shown above.

You can monitor the output of the above command, and also tail the following log to check progress:

sudo tail -f /var/log/messages

 

Step 9: (On Operator node)

olcnectl module instances --api-server operator.xx.xxxxx.com:8091 --environment-name myenvironment --update-config

 

olcnectl module report --environment-name myenvironment --name mycluster --children

 

Step 10: (On Control node)

After provisioning completes successfully, SSH to the Control node and run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

 

Check that the Kubernetes cluster was created successfully using the following command:

kubectl get nodes
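
If everything worked, all nodes should report a Ready status. The output should look similar to the following (node names will match your environment, and the VERSION column reflects the Kubernetes release shipped with OCNE 1.7; the values here are illustrative):

NAME                   STATUS   ROLES           AGE   VERSION
control.xx.xxxxx.com   Ready    control-plane   10m   v1.26.x
worker1.xx.xxxxx.com   Ready    <none>          9m    v1.26.x
worker2.xx.xxxxx.com   Ready    <none>          9m    v1.26.x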