Creating VM with stemcell failed: No valid host was found. There are not enough hosts available. Filter ImagePropertiesFilter returned 0 hosts


Arpit Sharma
 

Hi Johannes,

Yesterday I logged a ticket about the same issue and received a response from Mr. Mauro Morales suggesting that we repack the stemcell with the bosh CLI. I tried that idea and followed this link:
https://bosh.io/docs/repack-stemcell.html
and finally succeeded in changing the hypervisor type to qemu. Now I am able to launch a VM with the image. I have also reduced the flavor from x1.large to m1.medium, but now I am getting a new error.
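For reference, the repack invocation looked roughly like this (the stemcell filename is an example, not my exact one; the cloud-properties JSON is the part that changes the hypervisor):

bosh repack-stemcell bosh-stemcell-VERSION-openstack-kvm-ubuntu-trusty-go_agent.tgz \
  repacked-stemcell.tgz \
  --cloud-properties '{"hypervisor": "qemu"}'

The new error: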

Started deploying
Creating VM for instance 'bosh/0' from stemcell '197a22a9-0bc1-4365-9a43-035b0983179c'... Finished (00:01:15)
Waiting for the agent on VM 'b8db1076-2fd3-4d5e-a42b-2f539dc73468' to be ready... Finished (00:08:26)
Creating disk... Failed (00:00:01)
Failed deploying (00:09:46)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Deploying:
  Creating instance 'bosh/0':
    Updating instance disks:
      Updating disks:
        Deploying disk:
          Creating disk with size 32768, cloudProperties property.Map{}, instanceID b8db1076-2fd3-4d5e-a42b-2f539dc73468:
            CPI 'create_disk' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"Volume `f303c75a-829b-4206-814c-f0adb913a581' state is error, expected available","ok_to_retry":false}

Exit code 1


Can you help me with this?


Arpit Sharma
 

I am also fairly sure this is due to insufficient Cinder space, as I have seen in the logs. Can I choose a smaller flavor for this?


Arpit Sharma
 

I have reduced the disk size from 32 GB to 22 GB. Let me try with this.
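For reference, that is just the disk_size value (in MB) in the disk_pools section of the director manifest; a minimal sketch, assuming the classic bosh-init style manifest layout:

disk_pools:
- name: disks
  disk_size: 22528 # 22 GB = 22 * 1024 MB; was 32768 (32 GB)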


Johannes Hiemer
 

Hi Arpit,
can you see the volume being created in OS?



Arpit Sharma
 

No, the volume is not created at the Cinder level. I checked the scheduler logs and found there are not enough hosts available. I tried to create a disk on Cinder manually but was unable to. I also found that the base machine has only around 5 GB left; I think that is why I am facing this issue. Let me give Cinder some more space and try again.
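For reference, the manual check was roughly the following (the volume name is just an example, and the scheduler log path may vary by distribution):

[root(a)openstack ~]# openstack volume create --size 32 bosh-test-vol
[root(a)openstack ~]# openstack volume show bosh-test-vol
# status comes back 'error' instead of 'available'
[root(a)openstack ~]# grep -i 'valid host' /var/log/cinder/scheduler.log
# look for 'No valid host was found' / capacity filter messages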


Arpit Sharma
 

Hi Johannes,

Resolved the volume issue: there was not enough storage on the OpenStack host. I have reduced the size of the volume (from 32 GB to 18 GB). The director server has 2 vCPUs with 4 GB RAM. Now I am getting this error at the last stage...

Started deploying
Creating VM for instance 'bosh/0' from stemcell '875c81c1-fb72-4ff2-9925-46d74db0fdb4'... Finished (00:00:41)
Waiting for the agent on VM '9b94d14a-41a5-4305-a340-48e7100aec44' to be ready... Finished (00:08:07)
Creating disk... Finished (00:00:06)
Attaching disk 'e8c492a4-1f60-40df-bed8-a9207330f5c1' to VM '9b94d14a-41a5-4305-a340-48e7100aec44'... Finished (00:00:21)
Rendering job templates... Finished (00:00:02)
Compiling package 'ruby/c1086875b047d112e46756dcb63d8f19e63b3ac4'... Skipped [Package already compiled] (00:00:02)
Compiling package 'mysql/b7e73acc0bfe05f1c6cbfd97bf92d39b0d3155d5'... Skipped [Package already compiled] (00:00:02)
Compiling package 'libpq/661f5817afe24fa2f18946d2757bff63246b1d0d'... Skipped [Package already compiled] (00:00:00)
Compiling package 'ruby_openstack_cpi/6576c0d52231e773f4ad53f5c5a0785c4247696a'... Finished (00:49:27)
Compiling package 'postgres-9.4/ded764a075ae7513d4718b7cf200642fdbf81ae4'... Skipped [Package already compiled] (00:00:01)
Compiling package 'nginx/2ec2f63293bf6f544e95969bf5e5242bc226a800'... Skipped [Package already compiled] (00:00:00)
Compiling package 'registry/d81865cf0ad85fd79cb19aeb565bf622f2a17a83'... Skipped [Package already compiled] (00:00:04)
Compiling package 'davcli/5f08f8d5ab3addd0e11171f739f072b107b30b8c'... Skipped [Package already compiled] (00:00:00)
Compiling package 'health_monitor/e9317b2ad349f019e69261558afa587537f06f25'... Skipped [Package already compiled] (00:00:03)
Compiling package 'nats/63ae42eb73527625307ff522fb402832b407321d'... Skipped [Package already compiled] (00:00:01)
Compiling package 'bosh_openstack_cpi/918abecbb3015ee383d5cb2af23e8dbfed6392d1'... Finished (00:00:28)
Compiling package 'director/e9cd35786422e87bd0571a4423bc947e50fe97e6'... Skipped [Package already compiled] (00:00:05)
Compiling package 'postgres/3b1089109c074984577a0bac1b38018d7a2890ef'... Skipped [Package already compiled] (00:00:01)
Compiling package 's3cli/bb1c1976d221fdadf13a6bc873896cd5e2433580'... Skipped [Package already compiled] (00:00:00)
Compiling package 'verify_multidigest/8fc5d654cebad7725c34bb08b3f60b912db7094a'... Skipped [Package already compiled] (00:00:00)
Updating instance 'bosh/0'... Finished (00:01:00)
Waiting for instance 'bosh/0' to be running... Failed (00:07:55)
Failed deploying (01:08:40)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Deploying:
  Received non-running job state: 'failing'



Do you think this is due to insufficient resources on the director VM?


Johannes Hiemer
 

Hi Arpit,
two important things:

- I would not proceed with an installation if you are already running into
disk capacity issues now. A minimal useful installation consists of at
least 10-12 VMs and will consume far more resources.
- The error you see is perhaps caused by missing security groups? Are you
able to ssh into the director via ssh -i keyfile.pem vcap(a)ip-address?


Best,
Johannes



Arpit Sharma
 

Hi Johannes,

I agree with you. I know that after the director deployment I will need to deploy more than 12 VMs, which will definitely require better hardware. Our team will take some time to arrange it; meanwhile, I just want to complete this director installation. Once it is complete, I will start work on the new hardware.

Yes, I am able to ssh into the director from the OpenStack machine.


Johannes Hiemer
 

Do a sudo su with the password c1oudc0w and then run the following:

sudo su
[sudo] password for vcap:
root(a)dc1a850b-184b-4189-47e8-68db05d91bdd:/home/vcap# monit summary
The Monit daemon 5.2.5 uptime: 9d 2h 7m

Process 'nats' running
Process 'postgres' running
Process 'blobstore_nginx' running
Process 'director' running
Process 'worker_1' running
Process 'worker_2' running
Process 'worker_3' running
Process 'director_scheduler' running
Process 'director_nginx' running
Process 'health_monitor' running
Process 'registry' running
System 'system_localhost' running

This is how it should look.



Arpit Sharma
 

Hi Johannes,

I am not sure I understand. When I sudo to root from the vcap user, it asks for the vcap user's password. Where can I get this password?


Arpit Sharma
 

I am unable to log in with this password.


Johannes Hiemer
 

As I wrote: c1oudc0w

Best regards,

Johannes Hiemer



Arpit Sharma
 

I am using the same password, c1oudc0w, but I am unable to log in:

bosh/0:~$ sudo su
[sudo] password for vcap:
Sorry, try again.
[sudo] password for vcap:
Sorry, try again.
[sudo] password for vcap:
Sorry, try again.
sudo: 3 incorrect password attempts
bosh/0:~$ sudo su -
[sudo] password for vcap:
Sorry, try again.
[sudo] password for vcap:


Johannes Hiemer
 

But you could log in with ssh -i key vcap(a)ip?



Arpit Sharma
 

Yes, I logged in to the director from the OpenStack CLI machine via ssh with the private key. Maybe I made more than 3 wrong password attempts; let me create the director once again.
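Recreating it just means re-running the environment against the same manifest and state file; roughly (a sketch, with file names standing in for whatever was used originally):

bosh delete-env bosh.yml --state state.json
bosh create-env bosh.yml --state state.json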


Arpit Sharma
 

Hey Johannes,


Still the same issue. I am able to log in to the instance via ssh, but it does not accept the password:

[root(a)openstack ~]# ssh -i /root/.ssh/id_rsa_demokey vcap(a)10.100.10.23
The authenticity of host '10.100.10.23 (10.100.10.23)' can't be established.
ECDSA key fingerprint is 9c:00:0a:b9:24:11:a3:5c:89:84:73:4f:38:66:39:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.100.10.23' (ECDSA) to the list of known hosts.
Unauthorized use is strictly prohibited. All access and activity
is subject to logging and monitoring.
Last login: Fri Jul 14 12:31:29 2017
bosh/0:~$ sudo su
[sudo] password for vcap:
Sorry, try again.
[sudo] password for vcap:
Sorry, try again.
[sudo] password for vcap:
sudo: pam_authenticate: Conversation error
bosh/0:~$


Johannes Hiemer
 

Hi Arpit,
the password is c1oudc0w, and the 0 is a zero, not the letter O. Is that the one you used?



Arpit Sharma
 

Yes Johannes, I am using c1oudc0w, but it is still the same issue. I am surprised as well.


Arpit Sharma
 

Hi Johannes,

Today I tried with another stemcell (bosh-openstack-kvm-ubuntu-trusty-go_agent-raw) and also created a different security group; the security group rules are listed below. But it is still the same issue. I am able to log in to the director as the vcap user with this command:
ssh -i /root/.ssh/id_rsa_demokey vcap(a)10.100.10.23

but when I try to execute "sudo su -", it does not accept c1oudc0w as the password. I don't know why this is happening.

[root(a)openstack ~(keystone_demo)]# neutron security-group-list
+--------------------------------------+-----------+-----------------------------------------------------------------------------------+
| id                                   | name      | security_group_rules                                                              |
+--------------------------------------+-----------+-----------------------------------------------------------------------------------+
| bb412056-6f4e-40d9-a48f-8b1c5c4068eb | boshgroup | egress, IPv4                                                                      |
|                                      |           | egress, IPv6                                                                      |
|                                      |           | ingress, IPv4, 1-65535/tcp, remote_group_id: bb412056-6f4e-40d9-a48f-8b1c5c4068eb |
|                                      |           | ingress, IPv4, 22/tcp, remote_ip_prefix: 0.0.0.0/0                                |
|                                      |           | ingress, IPv4, 25555/tcp, remote_ip_prefix: 0.0.0.0/0                             |
|                                      |           | ingress, IPv4, 6868/tcp, remote_ip_prefix: 0.0.0.0/0                              |
+--------------------------------------+-----------+-----------------------------------------------------------------------------------+


Tushar Dadlani
 

The new expected behavior is to only allow you to become root if you use
the bosh CLI to perform your ssh, since that creates a better audit trail
and prevents unauthorized ssh.

http://bosh.io/jobs/director?source=github.com/cloudfoundry/bosh&version=262.3#p=director.generate_vm_passwords

If the generate_vm_passwords option is set to true, you don't get the
default password on your VM hosts.
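If you do need the static password for debugging, a minimal sketch of the relevant director manifest change, assuming the classic properties layout, would be:

properties:
  director:
    generate_vm_passwords: false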

Best,
Tushar