http://redmine.nehome.net/redmine/projects/nosql/wiki/Cassandra_-_Replica-Strategy_and_Snitch-Strategy_and_Partitioner-Strategy



Replica-Strategy

  • Set when the keyspace is created:
    [default@unknown] CREATE KEYSPACE score with placement_strategy='org.apache.cassandra.locator.NetworkTopologyStrategy' and strategy_options={0:3};
    
  • Determines how replicas are distributed across the cluster.
  • Two types are supported:
    • SimpleStrategy
    • NetworkTopologyStrategy

SimpleStrategy

  • Once the MD5 hash of the row key has determined the node that stores the data, replicas are placed on the immediately following (next) nodes clockwise around the ring, unconditionally.
  • The Snitch-Strategy settings described below are all ignored; only the clockwise order of the ring is considered.

NetworkTopologyStrategy

  • First, what does "DataCenter" mean in Cassandra?
    It means a replication group: a logical group of nodes within the cluster, tied together for replication purposes. Note that it does not mean a physical data center.
    
  • Relies on snitch information to place replicas correctly across data centers and racks.
  • Places replicas according to the replication groups (i.e. data centers) defined by whichever Snitch-Strategy is in use, other than SimpleSnitch.
  • Replica placement is configured independently for each data center, so asymmetric replica counts are possible:
    e.g. 3 replicas in DC1 and 2 in DC2 (see the example after this list).
    
  • The first replica in each data center is determined by the partitioner.
  • The remaining replicas are placed in each data center up to its configured count: using the rack information the snitch provides, the strategy walks clockwise around that data center's part of the ring, searching until it finds a node on a different rack than the previous replica.
  • If there are not enough nodes to spread across racks, replicas are placed on the same rack anyway.
  • When deciding how many replicas to use, consider:
    • being able to read local data without paying inter-data-center latency, and
    • being able to survive a variety of failure scenarios.
  • Under a multi-data-center architecture, the following two layouts are recommended:
    • Two replicas in each data center.
      • Tolerates one node failure per replication group, and still allows local reads at Consistency Level ONE during the failure.
    • Three replicas in each data center.
      • Tolerates one node failure per replication group at the strong consistency level LOCAL_QUORUM, or multiple (two) node failures per replication group at Consistency Level ONE.
  • The snitch handles efficient inter-node communication within the replication scope; it does not affect request handling between client applications and Cassandra.
  • Every node in the cluster must use the same snitch (cassandra.yaml).
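
For example, the per-data-center counts above can be set when the keyspace is created. A minimal cassandra-cli sketch, assuming the snitch reports data centers named DC1 and DC2 (the names and counts must match your topology):

    [default@unknown] CREATE KEYSPACE score
        with placement_strategy='org.apache.cassandra.locator.NetworkTopologyStrategy'
        and strategy_options={DC1:3,DC2:2};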

Snitch-Strategy

  • What is a snitch?
    It defines how nodes are grouped within the overall network topology.
    

SimpleSnitch

  • Configured in cassandra.yaml.
  • Makes no distinction between data centers or racks.
  • Once the partitioner has chosen the node that stores the data, the replicas are simply placed sequentially on the next nodes clockwise around the ring, as many as the replica count requires.

RackInferringSnitch

  • Groups nodes in the cluster based on their IP addresses (octet-based).
  • Given an IP of the form 10.DDD.RRR.NNN:
    • DDD identifies the data center; nodes sharing the value are treated as the same data center.
    • RRR identifies the rack; nodes sharing the value are treated as the same rack.
    • NNN identifies the individual node.
  • The DDD and RRR octets are the significant ones (worked example below).
  • Pairs well with NetworkTopologyStrategy.
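
A worked example with hypothetical addresses:

    10.100.47.5   ->  DC "100", rack "47", node 5
    10.100.47.9   ->  same DC, same rack (only the node octet differs)
    10.100.52.7   ->  same DC, different rack
    10.200.10.3   ->  different DC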

PropertyFileSnitch

  • Groups the nodes in the cluster according to a properties file (cassandra-topology.properties).
  • Every node must be described in the file (an administrative burden).
  • Every node must keep an identical copy of the file.
  • The information managed is similar to RackInferringSnitch:
    • a data center/rack definition per node.
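
A minimal sketch of what cassandra-topology.properties might contain (the IPs and the DC/rack names are assumptions):

    # cassandra-topology.properties
    192.168.1.101=DC1:RAC1
    192.168.1.102=DC1:RAC2
    192.168.2.101=DC2:RAC1
    # fallback for any node not listed above
    default=DC1:RAC1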

Partitioner

  • Configured in cassandra.yaml.
  • Defines how the node that stores the data itself (as opposed to its replicas) is chosen.

RandomPartitioner (RP)

  • The node that stores the data is chosen from the MD5 hash of the row key.
  • Which node a given piece of data lands on is unpredictable.
  • Data is not kept sorted as it is written:
    • Full-text search not possible.
    • Range scans not possible.
  • Load is distributed evenly (an advantage).
  • Scaling out and data redistribution are easy.
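
Since placement is driven purely by the MD5 digest of the row key, neighboring keys land far apart on the ring. A quick shell illustration (the keys are made up; the point is that consecutive keys hash to unrelated digests):

    # placement follows the MD5 digest of the row key;
    # compare the two outputs: they share no structure at all
    echo -n "user:1000" | md5sum
    echo -n "user:1001" | md5sum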

ByteOrderedPartitioner (BOP)

  • The row key bytes themselves serve as the token, so the key directly determines the node.
  • Because the node is chosen from the raw row-key value, data is kept sorted as it is written:
    • Full-text search possible.
    • Range scans possible.
  • Load distribution is unbalanced (a disadvantage).
  • Scaling out and data redistribution are difficult.


Xen - How I/O-Ring Works (between Domain-0 and Domain-U)

This post looks at how I/O between the frontend driver and the backend driver works in Xen.


Background

  • The native device driver (the physical driver) lives in Domain-0.
  • A single I/O ring, shown in the diagram below, exists at the Xen hypervisor level; it mediates all I/O.

Xen's I/O Ring layout (diagram)

I/O request flow

  1. An application in Domain-U issues an I/O request; the frontend driver receives it and posts it on the I/O ring in the Xen hypervisor as a request event.
  2. Once the event is posted, the vCPU that issued the I/O transitions to the blocked state and waits for the response (an I/O-bound state).
  3. The backend driver in Domain-0 reads the request event from the hypervisor's I/O ring and performs the actual I/O against the physical device through the native device driver.

I/O response flow

  1. Domain-0 posts the result of the physical I/O back onto the hypervisor's I/O ring as a response event.
  2. Once the event is posted, the Xen hypervisor wakes the blocked vCPU at the highest priority.
  3. The unblocked vCPU receives the I/O result through the frontend driver and carries on with its work.

Grant-Table: the access arbiter for the I/O ring

  • The ring is shared by Domain-0, Domain-U, and the Xen hypervisor; the access rights involved are managed through the Grant-Table.






Migration

Introduction

KVM currently supports savevm/loadvm and offline or live migration. Migration commands are given in the qemu monitor (Ctrl-Alt-2). Upon successful completion, the migrated VM continues to run on the destination host.

Note

You can migrate a guest from an AMD host to an Intel host and back. Naturally, a 64-bit guest can only be migrated to a 64-bit host, but a 32-bit guest can be migrated at will.

There are some older Intel processors which don't support NX (or XD), which may cause problems in a cluster that includes NX-supporting hosts. To disable NX for a given guest, start it with the parameter: -cpu qemu64,-nx
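
For example, such a guest might be started like this (a sketch; the memory size and disk image path are assumptions):

    qemu-system-x86_64 -m 1024 -hda /images/guest.img -cpu qemu64,-nx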

Requirements

  • The VM image is accessible on both source and destination hosts (located on a shared storage, e.g. using nfs).
  • It is recommended that the images directory be found at the same path on both hosts (for migrations of a copy-on-write image -- an image created on top of a base image using "qemu-img create -b ...").
  • The src and dst hosts must be on the same subnet (to keep the guest's network when tap is used).
  • Do not use -snapshot qemu command line option.
  • For the tcp: migration protocol:
    • the guest on the destination must be started the same way it was started on the source.

highlights / merits

  • Almost unnoticeable guest down time
  • Guest is not involved (unique to KVM Live Migration [1])
  • Capability to tunnel VM state through an external program (unique to KVM Live Migration [1])
  • ssh/gzip/bzip2/gpg/your own
  • Upon success guest continues to run on destination host, upon failure guest continues to run on source host (with one exception)
  • Short and Simple
  • Easy to enhance
  • Hardware independence (almost).
  • Support for migration of stopped (paused) VMs.
  • Open

[1] These features are unique to KVM Live Migration as far as I know. If you know of another hypervisor that supports any of them, please update this page or let me (Uri) know.

User Interface

The user interface is through the qemu monitor (Ctrl-Alt-2 on the SDL window).

Management

migrate [-d] <URI>
migrate_cancel    

The '-d' flag lets you query the status of the migration while it runs.

With no '-d', the monitor prompt returns only when the migration completes. URI can be one of 'exec:<command>' or 'tcp:<ip:port>'.

Status

info migrate 

Migration Parameters

migrate_set_speed <speed>   set bandwidth control parameters (max transfer rate per second)
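
For example, to cap the transfer rate at roughly 32 MB/s while the guest stays live (the value is an arbitrary illustration):

    (qemu) migrate_set_speed 32m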

Example / HOWTO

A is the source host, B is the destination host:

TCP example:

1. Start the VM on B with the exact same parameters as the VM on A, in migration-listen mode:

B: <qemu-command-line> -incoming tcp:0:4444 (or another port)

2. Start the migration (always on the source host):

A: migrate -d tcp:B:4444 (or another port)

3. Check the status (on A only):

A: (qemu) info migrate                   
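
Put together, a complete run might look like this (a sketch; the hostname B, port 4444, and the shared NFS image path are assumptions):

    # on B (destination): the same command line as on A, plus -incoming
    qemu-system-x86_64 -m 1024 -hda /mnt/nfs/vm01.img -incoming tcp:0:4444

    # on A (source), in the qemu monitor:
    (qemu) migrate -d tcp:B:4444
    (qemu) info migrate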

Offline example:

1. unlimit bandwidth used for migration:

A: migrate_set_speed 1g                

2. stop the guest:

A: stop                                

3. continue with either TCP or exec migration as described above.

Problems / Todo

  • TSC offset on the new host must be set in such a way that the guest sees a monotonically increasing TSC, otherwise the guest may hang indefinitely after migration.
  • usbdevice tablet complains after migration.
  • handle migration completion better (especially when network problems occur).
  • More informative status.
  • Migration does not work while the CPU is still switching between real mode and protected mode.

savevm/loadvm to an external state file (using pseudo-migration)

  • To be supported directly by Migration Protocols, but until then...
  • Save VM state into a compressed file
    • Save
stop                                                               
migrate_set_speed 4095m                                            
migrate "exec:gzip -c > STATEFILE.gz"                              
    • Load
gzip -c -d STATEFILE.gz | <qemu-command-line> -incoming "exec: cat"   or
<qemu-command-line> -incoming "exec: gzip -c -d STATEFILE.gz"
  • Save VM State into an encrypted file (assumes KEY has already been generated)
    • Save
stop                                                                          
migrate_set_speed 4095m
migrate "exec:gpg -q -e -r KEY -o STATFILE.gpg"
  • Load VM state from an encrypted file
gpg -q -d -r KEY STATEFILE.gpg | <qemu-command-line> -incoming "exec:cat"

more exec: options

  • Send encrypted VM state from host A to host B (Note: ssh is probably better, this is just for show)
    • on host A
migrate "exec:gpg -q -e -r KEY | nc B 4444"
    • on host B
nc -l 4444 | gpg -q -d -r KEY | <qemu-command-line> -incoming "exec:cat"
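
As the highlights section notes, ssh also works as the external program; a minimal sketch, assuming passwordless ssh from A to B and a scratch path /tmp/vm01.state:

    # on A, in the qemu monitor: stream the stopped guest's state to B over ssh
    (qemu) stop
    (qemu) migrate "exec:ssh B 'cat > /tmp/vm01.state'"

    # on B: start the guest from the transferred state file
    <qemu-command-line> -incoming "exec: cat /tmp/vm01.state"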

Algorithm (the short version)

1. Setup

  • Start guest on destination, connect, enable dirty page logging and more

2. Transfer Memory

  • Guest continues to run
  • Bandwidth limitation (controlled by the user)
  • First transfer the whole memory
  • Iteratively transfer all dirty pages (pages that were written to by the guest).

3. Stop the guest

  • And sync VM image(s) (guest's hard drives).

4. Transfer State

  • As fast as possible (no bandwidth limitation)
  • All VM devices' state and dirty pages yet to be transferred

5. Continue the guest

  • On destination upon success
    • Broadcast "I'm over here" Ethernet packet to announce new location of NIC(s).
  • On source upon failure (with one exception).


Instructions for kvm-13 and below: MigrationQemu0.8.2.




Virtualization - Full vs Para (Appendix Pass-Through) (20120.pdf)











[PDF] - Migration Xen to KVM

Cloud 2012. 5. 12. 22:19

A document on migrating VMs from Xen to KVM... 


Migration_Xen_to_KVM (Summit_2010_Xen_KVM).pdf



CloudStack 3.0 install review

Not much of a review to write, really...


How is this so simple??!!


The vagueness that plagued the 2.x line is largely gone... (Of course, concepts like Zone, Pod, Cluster, and Host may still feel cryptic if you're meeting them for the first time...)


Bottom line: compared to 2.x, very satisfied!! With decent hardware, a complete IaaS infrastructure should take about an hour to stand up...


In short: it's as easy as installing a program on Windows, wizard-level easy. Highly recommended!!


* I'm not writing a separate install manual... Why? Because the official CloudStack Quick Installation guide is already so good that there's nothing to add...


Link: Quick Install Guide (English)

http://download.cloud.com/releases/3.0.0/CloudStack3.0QuickInstallGuide.pdf



‎"Why Is Citrix Moving CloudStack to the Apache Foundation?"
"Why Align CloudStack with the AWS APIs?"



Original: http://gevaperry.typepad.com/main/2012/04/is-citrix-opening-a-three-front-war-with-cloudstack.html





In the Game of Clouds, You Win Or You Die: CloudStack

Citrix is making a big announcement today. It has two parts. First, it's moving its CloudStack framework, which was owned by Citrix and distributed at CloudStack.org under a GPL license, to the Apache Foundation. Second, it is aligning CloudStack with Amazon Web Services' architecture and APIs.

This is big news that's sure to ruffle some feathers in the cloud computing space. So the questions I have are:

  • Why are they doing this?
  • What are the implications?

Why Is Citrix Moving CloudStack to the Apache Foundation?

The move appears straightforward. Citrix acquired CloudStack through its $200 million acquisition of Cloud.com in July 2011. From what I'm hearing, it has had good success with enterprise and provider adoption of the product, but it was far from being accepted as an industry de facto standard, which is what, I assume, they had hoped for. So it would make sense for them to move it to a respected open source foundation like Apache.

But wait. There was already OpenStack vying for that de facto standard open source platform status, and Citrix announced its support for it two months before the Cloud.com acquisition. The company said it would continue to support both platforms after the acquisition, so what happened?

Citrix claims that the OpenStack foundation wasn't run well: it was dominated by Rackspace and had a "pay-to-play" model. The result was that the APIs were poorly designed and the product lacked stability and maturity (a claim I have heard from others). At the same time, while OpenStack is getting support from the likes of Cisco and HP, the community is somewhat fragmenting, with multiple distributions and extensions from the likes of startups Cloudscaling and Piston Cloud. This started creating a problem for enterprise customers and providers who were getting confused and having a bad experience with OpenStack.

In the meantime, Citrix is feeling competitive pressure from the company it views as its primary rival for actual customers (as opposed to winning the hearts and minds of the "community"): VMware.

It simply couldn't wait and had to go on the attack with a bold move. This is it -- and it's a pretty good one.

Why Align CloudStack with the AWS APIs?

The final piece in making CloudStack the de facto standard cloud platform is the API. By aligning with the Amazon APIs, CloudStack gains a highly adopted, proven API with a massive ecosystem of integrated tools and services around it. They tell me they will have 100% compatibility by the end of this year.

But the catch is this comes on the heels of Eucalyptus's announcement that it is aligning with the AWS APIs. Eucalyptus, just a reminder, is another open source cloud platform that has been vying for leadership among enterprise customers.

Game of Clouds

With this move complete, Citrix has a production-proven stable product, which is now an open source platform managed by the widely-respected Apache Foundation, using a popular API with a massive ecosystem around it. Pretty clever, I think.

On the other hand, CloudStack is fighting on three fronts now: its traditional arch-enemy VMware, the OpenStack camp with the dozens of vendors -- large and small -- behind it, and Eucalyptus. A regular game of thrones. And to paraphrase Game of Thrones: in the Game of Clouds, you either win or you die.


CloudStack + SDN

Cloud 2012. 5. 1. 16:43


  • Features
    • Escapes the limits of VLANs (e.g. the ~4K cap on VLAN IDs, etc.)
    • Uses GRE tunneling to realize an L2 overlay network on top of L3 (unifying the network even across IDCs)
      • An alternative to the old stopgap of imitating an L2 overlay with VPNs...
    • Instead of the limited VLAN ID space, per-account traffic isolation is done with a tenant key (see the sketch after this list)
      • The tenant key appears to play the role the VLAN tag (ID) plays in a traditional VLAN...
  • Concerns...
    • Using a tenant key over GRE tunneling certainly removes much of the VPN approach's complexity, but the performance cost of tunneling itself remains to be seen...
    • A separate issue from SDN, but there is no visible improvement for the router-VM bottleneck either...


That's what a quick skim suggests, anyway; a real test is needed before the full picture emerges...
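
For reference, this is roughly what a tenant-keyed GRE tunnel looks like in Open vSwitch (illustrative only; CloudStack provisions this itself, and the bridge name, peer IP, and key value here are assumptions):

    # create a bridge and a GRE tunnel to a peer hypervisor,
    # isolating this tenant's traffic under tunnel key 1001
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=203.0.113.2 options:key=1001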


Install OpenNebula-2.2-1 with Xen

Reference
http://opennebula.org
http://downloads.dsa-research.org/opennebula

Environments

Tested environments info

Test System / Network

- On VMware ESXi
- One vSwitch
- Two VMs
- VM IP Range : 192.168.100.151~160 (10 addresses)
- By root privileges
- Run "yum -y update" on all nodes
- Template VM Image : CentOS 5.6 x86_64

Test Nodes

- Management Node 
    - IP : 192.168.100.121 (one-01)
    - OS : CentOS 5.6 x86_64 (2.6.18-238.9.1.el5)
    - 1Core / 384RAM
- Xen Node
    - IP : 192.168.100.122 (one-02)
    - OS : CentOS 5.6 x86_64 (2.6.18-238.19.1.el5xen)
    - 2Core / 1024RAM

On whole nodes

Add to /etc/hosts

192.168.100.121 one-01
192.168.100.122 one-02

On Management Node

Install OpenNebula & Setup Env.

# cd /usr/local/src

Download opennebula-2.2-1.x86_64.rpm to /usr/local/src/ (from http://downloads.dsa-research.org/opennebula)

# wget  ftp://ftp.pbone.net/mirror/centos.karan.org/el5/extras/testing/SRPMS/xmlrpc-c-1.06.18-1.el5.kb.src.rpm

# ls -al /usr/local/src/
-rw-r--r--  1 root root     717867 Jul 22 13:53 opennebula-2.2-1.x86_64.rpm
-rw-r--r--  1 root root     708799 Jul 23 21:49 xmlrpc-c-1.06.18-1.el5.kb.src.rpm

# yum install ruby ruby-devel ruby-docs ruby-ri ruby-irb ruby-rdoc

# rpmbuild --rebuild xmlrpc-c-1.06.18-1.el5.kb.src.rpm

# rpm -Uvh /usr/src/redhat/RPMS/x86_64/xmlrpc-c-*

# rpm -Uvh opennebula-2.2-1.x86_64.rpm

Setup OpenNebula Admin User & Env

# echo "export ONE_XMLRPC=http://localhost:2633/RPC2" >> /etc/profile

# echo "export ONE_AUTH=/home/oneadmin/one_auth" >> /etc/profile

# useradd oneadmin -d /home/oneadmin

# echo "oneadmin:1234" > /home/oneadmin/one_auth

SSH Key Create & Setup sudo

# mkdir /home/oneadmin/.ssh

# ssh-keygen -t rsa -N '' -f /home/oneadmin/.ssh/id_rsa

# cp -a /home/oneadmin/.ssh/id_rsa.pub /home/oneadmin/.ssh/authorized_keys

# chmod 700 /home/oneadmin/.ssh

# chown -R oneadmin:oneadmin /home/oneadmin/.ssh

# sed -i 's/^Defaults    requiretty/# /g' /etc/sudoers

Add to /etc/ssh/ssh_config

Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

On Xen Node

Install Xen

# yum install ruby ebtables

# yum groupinstall Virtualization

# useradd oneadmin -d /home/oneadmin

# reboot

# virsh net-destroy default

# virsh net-undefine default

SSH Key Create & Setup sudo

# mkdir /home/oneadmin/.ssh

# scp root@one-01:/home/oneadmin/.ssh/* /home/oneadmin/.ssh/

# chown -R oneadmin:oneadmin /home/oneadmin/.ssh

# sed -i 's/^Defaults    requiretty/# /g' /etc/sudoers 

Add to /etc/ssh/ssh_config

Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

On Xen Node

Create Xen Template Image

# virt-install --name centos5_64 --ram 384 --paravirt --location=http://ftp.daum.net/centos/5/os/x86_64/ --file=/home/oneadmin/centos5_64.xen.img -s 1.5

** Install only the minimal package set

On VM Template Image - Setup VM Env. (After Installation & Reboot)

# echo "/var/lib/one/one-boot-init.sh" >> /etc/rc.local

# mkdir /var/lib/one/
# vi /var/lib/one/one-boot-init.sh
#!/bin/bash
# OpenNebula contextualization hook, run from /etc/rc.local on every boot.

DEV_CD="/dev/xvdc"                      # context CD-ROM attached by OpenNebula
MNT="/media/one"                        # mount point for the context image
CHK_FILE="/var/lib/one/.done_context"   # marker: first-boot context already applied

if [ -b ${DEV_CD} ]; then
        mkdir -p ${MNT}
        mount -t iso9660 ${DEV_CD} ${MNT}
        if [ -f ${MNT}/context.sh ]; then
                # run the contextualization script shipped on the CD
                . ${MNT}/init.sh
        fi
        umount ${MNT}
        if [ ! -e ${CHK_FILE} ]; then
                touch ${CHK_FILE}
                chattr +i ${CHK_FILE}
        fi
fi

exit 0
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Xen Virtual Ethernet
DEVICE=eth0
BOOTPROTO=static
ONBOOT=no
# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=localhost.localdomain

- Default runlevel is 3 (/etc/inittab)
- Keep the set of auto-started services minimal (chkconfig & ntsysv)

# sed -i 's/^id:.*/id:3:initdefault:/g' /etc/inittab

# chkconfig --list | grep "3:on" 
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
cpuspeed           0:off    1:on    2:on    3:on    4:on    5:on    6:off
crond              0:off    1:off    2:on    3:on    4:on    5:on    6:off
haldaemon          0:off    1:off    2:off    3:on    4:on    5:on    6:off
iptables           0:off    1:off    2:on    3:on    4:on    5:on    6:off
irqbalance         0:off    1:off    2:on    3:on    4:on    5:on    6:off
messagebus         0:off    1:off    2:off    3:on    4:on    5:on    6:off
network            0:off    1:off    2:on    3:on    4:on    5:on    6:off
sshd               0:off    1:off    2:on    3:on    4:on    5:on    6:off
syslog             0:off    1:off    2:on    3:on    4:on    5:on    6:off

- Copy Template Image to Management Node

# scp /home/oneadmin/centos5_64.xen.img root@one-01:/home/oneadmin/

On Management Node

Setup : NFS Export (for VM_DIR)

# vi /etc/exports
/var/lib/one            192.168.100.122(rw,no_root_squash)
/usr/share/one/hooks    192.168.100.122(rw,no_root_squash)
# exportfs -avr

# chkconfig portmap on

# chkconfig nfs on

# chkconfig nfslock on

# service portmap start

# service nfs start

# service nfslock start

On Xen Node

Mount NFS

# echo "one-01:/var/lib/one/ /var/lib/one nfs defaults 0 0" >> /etc/fstab

# echo "one-01:/usr/share/one/hooks /usr/share/one/hooks nfs defaults 0 0" >> /etc/fstab

# mkdir -p /usr/share/one/hooks

# mount -a

On Management Node

Configuration OpenNebula (oned.conf)

# vi /etc/one/oned.conf
HOST_MONITORING_INTERVAL = 60

VM_POLLING_INTERVAL      = 60

SCRIPTS_REMOTE_DIR=/var/tmp/one

PORT=2633

DB = [ backend = "mysql", server = "localhost", port = "3306", user = "root", passwd = "1234", db_name = "one" ]

VNC_BASE_PORT = 5900

DEBUG_LEVEL=3

NETWORK_SIZE = 254

MAC_PREFIX   = "02:00" 

DEFAULT_IMAGE_TYPE    = "OS" 

DEFAULT_DEVICE_PREFIX = "xvd" 

IM_MAD = [
    name       = "im_xen",
    executable = "one_im_ssh",
    arguments  = "xen" ]

VM_MAD = [
    name       = "vmm_xen",
    executable = "one_vmm_ssh",
    arguments  = "xen",
    default    = "vmm_ssh/vmm_ssh_xen.conf",
    type       = "xen" ]

TM_MAD = [
    name       = "tm_nfs",
    executable = "one_tm",
    arguments  = "tm_nfs/tm_nfs.conf" ]

HM_MAD = [
    executable = "one_hm" ]

VM_HOOK = [
    name      = "image",
    on        = "DONE",
    command   = "image.rb",
    arguments = "$VMID" ]

VM_HOOK = [
    name      = "ebtables-start",
    on        = "running",
    command   = "/usr/share/one/hooks/ebtables-xen", # or ebtables-xen 
    arguments = "one-$VMID",
    remote    = "yes" ]

VM_HOOK = [
    name      = "ebtables-flush",
    on        = "done",
    command   = "/usr/share/one/hooks/ebtables-flush",
    arguments = "",
    remote    = "yes" ]
# service oned stop

# service oned start

Add Xen Node

# onehost create one-02 im_xen vmm_xen tm_nfs

..... wait a minute...

# onehost list
  ID NAME              CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 one-02            default    1    200    196    100      2G    768M   on

Prepare Virtual Network Template

# vi /home/oneadmin/template.vnet.public
NAME = "Public" 
TYPE = "FIXED" 
BRIDGE = "xenbr0" 

LEASES = [ IP = "192.168.100.151" ]
LEASES = [ IP = "192.168.100.152" ]
LEASES = [ IP = "192.168.100.153" ]
LEASES = [ IP = "192.168.100.154" ]
LEASES = [ IP = "192.168.100.155" ]
LEASES = [ IP = "192.168.100.156" ]
LEASES = [ IP = "192.168.100.157" ]
LEASES = [ IP = "192.168.100.158" ]
LEASES = [ IP = "192.168.100.159" ]
LEASES = [ IP = "192.168.100.160" ]

Create Virtual Network

# onevnet create /home/oneadmin/template.vnet.public

# onevnet list
  ID USER     NAME              TYPE BRIDGE P #LEASES
   1 oneadmin Public           Fixed xenbr0 N       1

Prepare Virtual Machine Template

# vi /home/oneadmin/template.vm.centos5_64
CPU    = 1

MEMORY = 256

OS = [ bootloader = "/usr/bin/pygrub" ]

#DISK  = [
#  IMAGE ="Xen-CentOS5_64",
#  target = "xvda" ]

DISK = [
  source   = "/home/oneadmin/centos5_64.xen.img",
  target   = "xvda",
  readonly = "no" ]

DISK  = [
  type = "swap",
  size = "1024",
  target = "xvdb" ]

DISK  = [
  type = "fs",
  format = "ext3",
  size = "20480",
  target = "xvdd" ]

NIC = [ NETWORK="Public" ]

GRAPHICS = [ 
  type    = "vnc",              
  listen  = "127.0.0.1" ]

CONTEXT = [
  files = "/home/oneadmin/init.sh /home/oneadmin/.ssh/id_rsa.pub /home/oneadmin/setup-network.sh",
  root_pubkey = "id_rsa.pub",
  VMID = "$VMID",
  target = "xvdc" ]

Create Default Context Files

# vi /home/oneadmin/init.sh
#!/bin/bash

# -------------------------------------------------------------------------- #
# Copyright 2002-2009, Distributed Systems Architecture Group, Universidad   #
# Complutense de Madrid (dsa-research.org)                                   #
#                                                                            #
# Licensed under the Apache License, Version 2.0 (the "License"); you may    #
# not use this file except in compliance with the License. You may obtain    #
# a copy of the License at                                                   #
#                                                                            #
# http://www.apache.org/licenses/LICENSE-2.0                                 #
#                                                                            #
# Unless required by applicable law or agreed to in writing, software        #
# distributed under the License is distributed on an "AS IS" BASIS,          #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   #
# See the License for the specific language governing permissions and        #
# limitations under the License.                                             #
#--------------------------------------------------------------------------- #

CHK_FILE="/var/lib/one/.done_context" 

# reject ping while contextualization runs (removed again at the end of this script)
/sbin/iptables -I INPUT -p icmp -j REJECT

if [ ! -e ${CHK_FILE} ]; then
    ## Random Password for root
#   dd if=/dev/urandom count=128 2>/dev/null | md5sum | passwd --stdin root

    MNT="/media/one" 

    if [ -f ${MNT}/context.sh ]; then
      . ${MNT}/context.sh
    fi

    fdisk /dev/xvdb << EOF
n
p
1

t
82
w
EOF
    if [ -b /dev/xvdb1 ]; then
        mkswap /dev/xvdb1
        echo "/dev/xvdb1 swap swap defaults 0 0" >> /etc/fstab
        swapon -a
    fi

    #hostname $HOSTNAME
    #sed -i "/HOSTNAME=/s/=.*$/=$HOSTNAME/" /etc/sysconfig/network

    #if [ -n "$IP_PUBLIC" ]; then
    #   ifconfig eth0 $IP_PUBLIC
    #fi

    #if [ -n "$NETMASK" ]; then
    #   ifconfig eth0 netmask $NETMASK
    #fi

    if [ -f ${MNT}/$ROOT_PUBKEY ]; then
        mkdir -p /root/.ssh
        cat ${MNT}/$ROOT_PUBKEY >> /root/.ssh/authorized_keys
        # sshd needs the directory itself to remain traversable
        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys
    fi

    if [ -n "$USERNAME" ]; then
        useradd $USERNAME
        if [ -f ${MNT}/$USER_PUBKEY ]; then
            mkdir -p /home/$USERNAME/.ssh/
            cat ${MNT}/$USER_PUBKEY >> /home/$USERNAME/.ssh/authorized_keys
            chown -R $USERNAME:$USERNAME /home/$USERNAME/.ssh
            chmod 700 /home/$USERNAME/.ssh
            chmod 600 /home/$USERNAME/.ssh/authorized_keys
        fi
    fi
fi

### Network Setup
if [ -f ${MNT}/setup-network.sh ]; then
    ${MNT}/setup-network.sh
fi

if [ ! -e ${CHK_FILE} ]; then
    ### Run Deploy
    if [ -f ${MNT}/deploy*/setup.sh ]; then
        ${MNT}/deploy*/setup.sh
    fi
fi

## Etc Jobs

### VMS init Setup END
/sbin/iptables -D INPUT -p icmp -j REJECT
# vi /home/oneadmin/setup-network.sh
#!/bin/bash
#
# chkconfig: 2345 10 90
# description:  network reconfigure
#                                                     
# -------------------------------------------------------------------------- #
# Copyright 2002-2009, Distributed Systems Architecture Group, Universidad   #
# Complutense de Madrid (dsa-research.org)                                   #
#                                                                            #
# Licensed under the Apache License, Version 2.0 (the "License"); you may    #
# not use this file except in compliance with the License. You may obtain    #
# a copy of the License at                                                   #
#                                                                            #
# http://www.apache.org/licenses/LICENSE-2.0                                 #
#                                                                            #
# Unless required by applicable law or agreed to in writing, software        #
# distributed under the License is distributed on an "AS IS" BASIS,          #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   #
# See the License for the specific language governing permissions and        #
# limitations under the License.                                             #
#--------------------------------------------------------------------------- #

# Gets IP address from a given MAC
mac2ip() {
    mac=$1

    let ip_a=0x`echo $mac | cut -d: -f 3`
    let ip_b=0x`echo $mac | cut -d: -f 4`
    let ip_c=0x`echo $mac | cut -d: -f 5`
    let ip_d=0x`echo $mac | cut -d: -f 6`

    ip="$ip_a.$ip_b.$ip_c.$ip_d" 

    echo $ip
}

# Gets the network part of an IP
get_network() {
    IP=$1

    echo $IP | cut -d'.' -f1,2,3
}

get_interfaces() {
    IFCMD="/sbin/ifconfig -a" 

    $IFCMD | grep ^eth | sed 's/ *Link encap:Ethernet.*HWaddr /-/g'
}

get_dev() {
    echo $1 | cut -d'-' -f 1
}

get_mac() {
    echo $1 | cut -d'-' -f 2
}

gen_hosts() {
    NETWORK=$1
    echo "127.0.0.1 localhost" 
    for n in `seq -w 01 99`; do
        n2=`echo $n | sed 's/^0*//'`
        echo ${NETWORK}.$n2 cluster${n}
    done
}

gen_exports() {
    NETWORK=$1
    echo "/images ${NETWORK}.0/255.255.255.0(rw,async,no_subtree_check)" 
}

gen_hostname() {
    MAC=$1
    #NUM=`mac2ip $MAC | cut -d'.' -f4`
    NUM=`mac2ip $MAC`
    #NUM2=`echo 000000$NUM | sed 's/.*\(..\)/\1/'`
    #echo cluster$NUM2
    NUM2=`echo $NUM | sed 's/\./-/g'`
    echo $NUM2
}

gen_interface() {
 DEV_MAC=$1
 DEV=`get_dev $DEV_MAC`
 MAC=`get_mac $DEV_MAC`
 IP=`mac2ip $MAC`
 NETWORK=`get_network $IP`

cat <<EOT
DEVICE=$DEV
BOOTPROTO=none
HWADDR=$MAC
ONBOOT=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=$IP
EOT

    if [ $DEV == "eth0" ]; then
      echo "GATEWAY=$NETWORK.1" 
    fi

echo "" 
}

IFACES=`get_interfaces`

for i in $IFACES; do
    DEV=`get_dev $i`
  gen_interface $i > /etc/sysconfig/network-scripts/ifcfg-${DEV}
done

# gen_hosts $NETWORK > /etc/hosts
# gen_exports $NETWORK  > /etc/exports
# gen_hostname $MAC  > /etc/hostname

#ifdown $DEV
#ifup $DEV

MNT="/media/one" 
if [ -f ${MNT}/context.sh ]; then
  . ${MNT}/context.sh
fi

HOSTNAME=ONE-${VMID}-`gen_hostname $MAC`

hostname $HOSTNAME
sed -i "/HOSTNAME=/s/=.*$/=$HOSTNAME/" /etc/sysconfig/network

if [ -n "$DNS" ]; then
    # rebuild resolv.conf from the context-supplied DNS list
    : > /etc/resolv.conf
    for dns in $DNS
    do
        echo "nameserver $dns" >> /etc/resolv.conf
    done
else
    echo "nameserver 168.126.63.1
nameserver 168.126.63.2" > /etc/resolv.conf
fi

service network restart

Creating VM and Connect SSH

Create VM

# onevm create /home/oneadmin/template.vm.centos5_64

..... wait a minute...

# onevm list
   ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
   37 oneadmin   one-37 runn   0    256M          one-02 07 04:40:21
# onevm show 37
VIRTUAL MACHINE 37 INFORMATION                                                  
ID             : 37                  
NAME           : one-37              
STATE          : ACTIVE              
LCM_STATE      : RUNNING             
START TIME     : 07/24 18:38:40      
END TIME       : -                   
DEPLOY ID:     : one-37              

VIRTUAL MACHINE MONITORING                                                      
NET_RX         : 11                  
USED MEMORY    : 262144              
USED CPU       : 0                   
NET_TX         : 6887                

VIRTUAL MACHINE TEMPLATE                                                        
CONTEXT=[
  FILES=/home/oneadmin/init.sh /home/oneadmin/.ssh/id_rsa.pub /home/oneadmin/setup-network.sh,
  ROOT_PUBKEY=id_rsa.pub,
  TARGET=xvdc,
  VMID=37 ]
CPU=1
DISK=[
  CLONE=YES,
  DISK_ID=0,
  IMAGE=Xen-CentOS5_64,
  IMAGE_ID=2,
  READONLY=NO,
  SAVE=NO,
  SOURCE=/var/lib/one//images/334f3324668b29bd253c7d304e499576ede0b611,
  TARGET=xvda,
  TYPE=DISK ]
DISK=[
  DISK_ID=1,
  SIZE=1024,
  TARGET=xvdb,
  TYPE=swap ]
DISK=[
  DISK_ID=2,
  FORMAT=ext3,
  SIZE=20480,
  TARGET=xvdd,
  TYPE=fs ]
GRAPHICS=[
  LISTEN=127.0.0.1,
  PORT=5937,
  TYPE=vnc ]
MEMORY=256
NAME=one-37
NIC=[
  BRIDGE=xenbr0,
  IP=192.168.100.154,
  MAC=02:00:c0:a8:64:9a,
  NETWORK=Public,
  NETWORK_ID=1 ]
OS=[
  BOOTLOADER=/usr/bin/pygrub ]
VMID=37

Connect to VM's SSH

# ssh -i /home/oneadmin/.ssh/id_rsa root@192.168.100.154
Warning: Permanently added '192.168.100.154' (RSA) to the list of known hosts.
Last login: Sun Jul 31 22:35:14 2011 from 192.168.100.121
[root@ONE-37-192-168-100-154 ~]#

Some Info on VM

[root@ONE-37-192-168-100-154 ~]# cat /proc/cpuinfo 
processor    : 0
vendor_id    : GenuineIntel
cpu family    : 6
model        : 23
model name    : Intel(R) Core(TM)2 Quad CPU    Q9550  @ 2.83GHz
stepping    : 10
cpu MHz        : 2826.250
cache size    : 6144 KB
fpu        : yes
fpu_exception    : yes
cpuid level    : 13
wp        : yes
flags        : fpu tsc msr pae cx8 apic cmov pat clflush acpi mmx fxsr sse sse2 ss syscall nx lm constant_tsc pni ssse3 cx16 sse4_1 lahf_lm
bogomips    : 7097.80
clflush size    : 64
cache_alignment    : 64
address sizes    : 40 bits physical, 48 bits virtual
power management:

[root@ONE-37-192-168-100-154 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           256        252          3          0         45        145
-/+ buffers/cache:         61        194
Swap:         1019          0       1019

[root@ONE-37-192-168-100-154 ~]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:00:C0:A8:64:9A  
          inet addr:192.168.100.154  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:50726 errors:0 dropped:0 overruns:0 frame:0
          TX packets:146 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7059149 (6.7 MiB)  TX bytes:17642 (17.2 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:700 (700.0 b)  TX bytes:700 (700.0 b)

[root@ONE-37-192-168-100-154 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
0.0.0.0         192.168.100.1   0.0.0.0         UG    0      0        0 eth0

[root@ONE-37-192-168-100-154 ~]# fdisk -l

Disk /dev/xvda: 1572 MB, 1572864000 bytes
255 heads, 63 sectors/track, 191 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          13      104391   83  Linux
/dev/xvda2              14         191     1429785   83  Linux

Disk /dev/xvdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdb1               1         130     1044193+  82  Linux swap / Solaris

Disk /dev/xvdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdd doesn't contain a valid partition table

Disk /dev/xvdc: 0 MB, 382976 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdc doesn't contain a valid partition table

[root@ONE-37-192-168-100-154 ~]# mount
/dev/xvda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/xvda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/xvdd on /mnt type ext3 (rw)

[root@ONE-37-192-168-100-154 ~]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/xvda2    ext3    1.4G  995M  289M  78% /
/dev/xvda1    ext3     99M   14M   81M  14% /boot
tmpfs        tmpfs    128M     0  128M   0% /dev/shm

[END]


 http://redmine.nehome.net/redmine/projects/cloudstack/wiki/Install_-_CloudStack_CE_21x_ManagementComputing_Node 



[Management Server (CentOS 5.x)]
(install and configure the JDK first)

The hostname must be an FQDN
Edit /etc/hosts on all nodes
Setting up a DNS server is optional
yum -y install mysql-server
sed -i '/^\[mysqld\]/a innodb_lock_wait_timeout=600' /etc/my.cnf
sed -i '/^\[mysqld\]/a innodb_rollback_on_timeout=1' /etc/my.cnf
service mysqld start
chkconfig mysqld on
mysqladmin -uroot password '1234'
wget http://download.cloud.com/foss/centos/cloud.repo -O /etc/yum.repos.d/cloud.repo
yum clean all
yum -y install cloud-console-proxy
yum -y install cloud-client
cloud-setup-databases cloud:1234@localhost kvm --deploy-as=root:1234
virsh net-destroy default
rm -f /etc/libvirt/qemu/networks/default.xml
service dnsmasq stop
service libvirtd stop
chkconfig dnsmasq off
chkconfig libvirtd off
cloud-setup-management
reboot
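
After the reboot, the management UI should be reachable from a browser (a sketch; substitute your server's IP, and note that admin/password are the stock initial credentials in this release line):

    http://<management-server-ip>:8080/client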

----------------

[Computing Node]

wget http://download.cloud.com/foss/centos/cloud.repo -O /etc/yum.repos.d/cloud.repo
yum clean all
yum -y install cloud-agent
cloud-setup-agent
cloud-setup-agent --no-kvm (if the KVM module is not to be used)
virsh net-destroy default
rm -f /etc/libvirt/qemu/networks/default.xml
(on CentOS)
#iptables -I INPUT -i cloud0 -j ACCEPT  
#iptables -I FORWARD -i cloud0 -o cloud0 -j ACCEPT  
#iptables -I FORWARD -i cloudbr0 -o cloudbr0 -j ACCEPT 
#iptables -I INPUT -m tcp -p tcp --dport 5900:6100 -j ACCEPT 
(on Fedora)
#iptables -I FORWARD -i cloudbr0 -o cloudbr0 -j ACCEPT 
#iptables -I INPUT -m tcp -p tcp --dport 5900:6100 -j ACCEPT
service iptables save
reboot
