1- On AWS, create a VM: CentOS 7.4, Linux kernel 3.10.0-693.21.1
2- Install the missing packages
3- Run the Contrail installation playbook
https://www.juniper.net/documentation/en_US/contrail5.0/topics/reference/supported-platforms-50-vnc.html
With Terraform:
# AWS access and secret key
# 1- create a VPC
# 1a- create an Internet Gateway
# 1b- create a route in the route table
# 1c- create security groups
# 2- create a subnet
# 3- create a key pair to access the VM
# 4- create an instance
# define variables and point to terraform.tfvars
variable "access_key" {}
variable "secret_key" {}
variable "region" { default = "us-west-2" }
variable "pri_sub1" { default = "10.1.1.0/24" }
#AWS access and secret key to access AWS
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
# 1- create a VPC in AWS
resource "aws_vpc" "vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags {
Name = "ixVPC"
}
}
# 1a- create an Internet Gateway
resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.vpc.id}"
tags {
Name = "main_gw"
}
}
# 1b- create a route in the main route table
resource "aws_route" "internet_access" {
route_table_id = "${aws_vpc.vpc.main_route_table_id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gw.id}"
}
# 1c- create Security Groups
resource "aws_security_group" "allow_ssh" {
name = "allow_inbound_SSH"
description = "Allow inbound SSH traffic from any IP address"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
#prefix_list_ids = ["pl-12c4e678"]
}
tags {
Name = "Allow SSH"
}
}
# 2- create a subnet
resource "aws_subnet" "private" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.pri_sub1}"
tags {
#Name = "${var.name}-private"
Name = "ixVPC-private"
}
}
# 3- create a key pair to access the VM
#resource "aws_key_pair" "admin_key" {
# key_name = "admin_key"
# public_key = "ssh-rsa AAAAB3[…]"
#}
# 4- create an instance
resource "aws_instance" "vminstance1" {
# AWS Centos 7 AMI
ami = "ami-5490ed2c"
instance_type = "t2.micro"
# key pair in us-west-2 or Oregon
key_name = "TerraformKeyPair"
# to log in: centos / terraformkeypair
#
subnet_id = "${aws_subnet.private.id}"
#security_groups = ["TerraformsSecurityGroup"]
# inside a VPC, reference security groups by ID via vpc_security_group_ids
vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]
associate_public_ip_address = true
tags {
Name = "VMinstance1"
}
}
cd <directory containing terraformfilelive.tf>
terraform init      # first run only: downloads the AWS provider plugin
terraform plan
terraform apply     # type "yes" to confirm
terraform destroy   # at the end; type "yes" to confirm
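The variables declared at the top of the .tf file are read from terraform.tfvars in the same directory. A minimal sketch of creating that file; the credential values are placeholders (real ones come from the AWS IAM console), and `pri_sub1` keeps its default:

```shell
# Write terraform.tfvars next to the .tf file; the values below are placeholders.
cat > terraform.tfvars <<'EOF'
access_key = "YOUR_AWS_ACCESS_KEY"
secret_key = "YOUR_AWS_SECRET_KEY"
region     = "us-west-2"
EOF
```

Keep this file out of version control, since it holds credentials.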
Run Ansible from the laptop:
1- copy the private key (created on AWS) to the local client / laptop
2- change its mode to 0600
3- run ssh-agent bash and add the private key
4- ssh centos@<AWS_server_IP>
ssh-keygen                # creates a private key (id_rsa) and a public key (id_rsa.pub)
chmod 0600 .ssh/id_rsa    # ssh requires restrictive permissions on the private key
#test: ssh localhost
#ssh-copy-id localhost (or an IP address)   # copies the public key into ".ssh/authorized_keys"
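The key handling above can be rehearsed end to end with a throwaway key; the path and key name here are illustrative, not the real AWS key pair:

```shell
# Generate a throwaway RSA key pair non-interactively (private key + .pub)
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/aws_demo_key -q
# ssh refuses private keys that other users can read
chmod 0600 /tmp/aws_demo_key
ls -l /tmp/aws_demo_key      # should show -rw-------
# With the real .pem handled the same way, the login sequence is:
#   ssh-agent bash
#   ssh-add ~/.ssh/TerraformKeyPair.pem
#   ssh centos@<AWS_server_IP>
```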
If the installation is done "locally": log into the VM on AWS and run the installation from there, i.e. on the AWS VM.
--------------------------------------------------------------------------------------------------------------------------
1- On the local PC: Terraform >>>> creates the VM on AWS (with the key)
2- On the local PC: Ansible pushes the configuration steps to the VM on AWS >>>> alternatively, manually log into the AWS VM and run the commands
1- Log in as the "centos" user (same as on the AWS VM, just to make it easier)
2- Upload the key to the local PC (same key as on the AWS VM, to make logging in easier)
3- # ssh-keygen (.ssh/id_rsa is the default, passphrase …) # not needed because I will use the same key as on AWS
ssh-agent bash
ssh-add ~/.ssh/TerraformKeyPair.pem
# ssh-add -l or ssh-add -L #just to check
more .ssh/authorized_keys   # should show the Terraform public key (used to log into the VM remotely: home to the AWS DC)
vi .ssh/authorized_keys     # add the key printed by "ssh-add -L"
ssh localhost               # check that the key has been copied and works
exit
#ssh-copy-id <10.0.1.104>
# CentOS
#sudo yum install -y ansible-2.4.2.0
#sudo yum install git -y
# Ubuntu
sudo pip install ansible==2.4.2.0
sudo apt-get install git -y
git clone https://github.com/Juniper/contrail-ansible-deployer
cd contrail-ansible-deployer
ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/provision_instances.yml
ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/configure_instances.yml
ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/install_k8s.yml
ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/install_contrail.yml
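The playbooks can be chained in a small wrapper. This dry-run sketch only prints the invocations (drop the leading echo to execute them); the ordering of install_k8s before install_contrail is an assumption taken from the upstream contrail-ansible-deployer README for a Kubernetes orchestrator, so verify it against your checkout:

```shell
# Dry-run wrapper: print the playbook invocations in order; remove the
# leading "echo" to execute them. set -e stops the chain on the first failure.
set -e
playbooks="provision_instances configure_instances install_k8s install_contrail"
for pb in $playbooks; do
  echo ansible-playbook -e orchestrator=kubernetes -i inventory/ "playbooks/${pb}.yml"
done
```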
-------------------------------------------------------------------------------------------
#ansible-playbook -i inventory/ -e orchestrator=kubernetes -e '{"instances":{"bms1":{"ip":"10.0.1.104","provider":"bms"}}}' playbooks/configure_instances.yml
#ansible-playbook -i inventory/ -e orchestrator=kubernetes -e '{"instances":{"bms1":{"ip":"10.0.1.104","provider":"bms"}}}' playbooks/install_contrail.yml
#ansible-playbook -i inventory/ -e orchestrator=kubernetes -e '{"instances":{"bms1":{"ip":"192.168.1.100","provider":"bms"}}}' playbooks/install_k8s.yml
ansible-playbook -i inventory/ -e orchestrator=kubernetes -e '{"instances":{"bms1":{"ip":"localhost","provider":"bms"}}}' playbooks/configure_instances.yml
ansible-playbook -i inventory/ -e orchestrator=kubernetes -e '{"instances":{"bms1":{"ip":"localhost","provider":"bms"}}}' playbooks/install_k8s.yml
ansible-playbook -i inventory/ -e orchestrator=kubernetes -e '{"instances":{"bms1":{"ip":"localhost","provider":"bms"}}}' playbooks/install_contrail.yml
Issue:
AnsibleUndefinedVariable: 'cassandra_seeds' is undefined