bosh aws create in EU-WEST-1

Riaan Labuschagne <zapmanandroid@...>
 

Hi,

I am deploying to AWS in Ireland (eu-west-1) and everything goes well until
I hit this error:

Executing migration CreateRdsDbs
details in S3 receipt: aws_rds_receipt and file: aws_rds_receipt.yml
/var/lib/gems/2.1.0/gems/aws-sdk-v1-1.60.2/lib/aws/core/client.rb:375:in
`return_or_raise': Cannot create a db.t1.micro Multi-AZ instance because no
subnets exist in availability zones with sufficient capacity for VPC and
storage type : standard for db.t1.micro. Please first create at least 2 new
subnets; choose from these availability zones: eu-west-1a, eu-west-1b,
eu-west-1c. (AWS::RDS::Errors::InvalidVPCNetworkStateFault)
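
One way to check which of those availability zones already have subnets in
the deployment's VPC (a sketch using the AWS CLI; vpc-xxxxxxxx is a
placeholder for the VPC used by the deployment):

# RDS Multi-AZ needs subnets in at least two of the listed availability zones
aws ec2 describe-subnets \
  --region eu-west-1 \
  --filters "Name=vpc-id,Values=vpc-xxxxxxxx" \
  --query 'Subnets[].[AvailabilityZone,SubnetId,CidrBlock]' \
  --output table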


I have no idea why this is happening.

Can someone please point me in the right direction?

Thank you.
Riaan


Failed to update api when deploy cf into aws

王小锋 <zzuwxf at gmail.com...>
 

Hi, there

I tried to deploy CF into AWS using BOSH, but it failed when updating the api
job. I checked the log and found the following error:

I attached a 100G disk to the cloud_controller_ng VM, so I am not sure why it
failed with "no space left on device". Any ideas? Thanks.

[2015-10-09 11:51:12 #7253] [] DEBUG -- DirectorJobRunner: RECEIVED:
director.bcb35dd8-a609-49f3-aab5-c8eae54d2527.2b1e5e7e-ce92-4d91-b65a-f99b93a399e8
{"exception":{"message":"Action Failed get_task: Task
7f50c117-a3f6-4a89-69ec-843036a5e13a result: Applying: Applying job
cloud_controller_ng: Applying package buildpack_staticfile for job
cloud_controller_ng: Enabling package: failed to enable: symlink
/var/vcap/data/packages/buildpack_staticfile/5e2015a8345f2a650a524d1eece13f34acc72b87.1-a4f016c6f79669a734f6320fd3a15fc54fe8682c
/var/vcap/jobs/cloud_controller_ng/packages/buildpack_staticfile: no space
left on device"}}
E, [2015-10-09 11:51:12 #7253] [canary_update(api_z1/0)] ERROR --
DirectorJobRunner: Error updating canary instance:
#<Bosh::Director::RpcRemoteException: Action Failed get_task: Task
7f50c117-a3f6-4a89-69ec-843036a5e13a result: Applying: Applying job
cloud_controller_ng: Applying package buildpack_staticfile for job
cloud_controller_ng: Enabling package: failed to enable: symlink
/var/vcap/data/packages/buildpack_staticfile/5e2015a8345f2a650a524d1eece13f34acc72b87.1-a4f016c6f79669a734f6320fd3a15fc54fe8682c
/var/vcap/jobs/cloud_controller_ng/packages/buildpack_staticfile: no space
left on device>
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/agent_client.rb:231:in
`handle_method'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/agent_client.rb:286:in
`handle_message_with_retry'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/agent_client.rb:58:in
`method_missing'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/agent_client.rb:310:in
`get_task_status'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/agent_client.rb:155:in
`wait_for_task'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/agent_client.rb:299:in
`send_message'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/agent_client.rb:90:in
`apply'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/instance_updater.rb:168:in
`apply_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/instance_updater.rb:67:in
`block in update_steps'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/instance_updater.rb:88:in
`call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/instance_updater.rb:88:in
`block in update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/instance_updater.rb:87:in
`each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/instance_updater.rb:87:in
`update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/job_updater.rb:74:in
`block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.3098.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/job_updater.rb:72:in
`block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/event_log.rb:97:in
`call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3098.0/lib/bosh/director/event_log.rb:97:in
`advance_and_track'
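
A quick way to confirm which filesystem is actually full (a sketch; assumes
bosh ssh access to the api_z1/0 VM). Note that packages are staged under
/var/vcap/data on the ephemeral disk, not on the persistent disk, which is
typically mounted at /var/vcap/store:

bosh ssh api_z1 0
# then, on the VM:
df -h /var/vcap/data /var/vcap/store
du -sh /var/vcap/data/packages/* 2>/dev/null | sort -h | tail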


Re: cf backup and restore

Duncan Winn
 

Yes.

Back up your DBs, blobstore, and config:

http://blog.pivotal.io/pivotal-cloud-foundry/features/restoring-pivotal-cloud-foundry-after-disaster
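
As a rough illustration only (a sketch, not the full procedure from the
linked post; the port, database name, and blobstore path are assumptions
based on cf-release defaults):

# On the postgres/ccdb VM: dump the Cloud Controller database
/var/vcap/packages/postgres/bin/pg_dump -h 127.0.0.1 -p 5524 -U vcap ccdb > /tmp/ccdb.sql

# On the blobstore (nfs_server) VM: archive the blob directories
tar czf /tmp/blobstore.tgz /var/vcap/store/shared

# Plus keep your deployment manifests and config under version control.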



Duncan Winn
Pivotal CF Advisory Solutions Architect
Mobile: 415.579.5365 | Twitter: @duncwinn | GitHub: duncwinn

On Thu, Oct 8, 2015 at 2:23 AM, jinsenglin <jinsenglin(a)iii.org.tw> wrote:

Hi all



If I configure my Cloud Foundry to use a robust external (1) LDAP server for
UAA, (2) blob store, e.g., OpenStack Swift, and (3) PostgreSQL for the CCDB,
does that mean I can restore my Cloud Foundry after a disaster?



If so, is there any guideline for this process?



Thanks all.



Jim @ III > Data Analytics Technology & Applications Research Institute
(DATA)


cf backup and restore

Jim
 

Hi all



If I configure my Cloud Foundry to use a robust external (1) LDAP server for
UAA, (2) blob store, e.g., OpenStack Swift, and (3) PostgreSQL for the CCDB,
does that mean I can restore my Cloud Foundry after a disaster?



If so, is there any guideline for this process?



Thanks all.



Jim @ III > Data Analytics Technology & Applications Research Institute
(DATA)


Drain scripts now run for *each* release job on the VM

Dmitriy Kalinin
 

The 3093+ stemcell was released several days ago, fixing one shortcoming of
drain scripts: we now run the drain script for each release job on the VM, in
parallel.

https://github.com/cloudfoundry/bosh/releases/tag/stable-3093
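
For anyone who has not written one yet, a minimal drain script sketch (the
monit process name is a placeholder; the contract is to print an integer
number of seconds for the agent to wait, then exit 0):

#!/bin/bash
# e.g. /var/vcap/jobs/<job>/bin/drain -- runs before the release job is stopped
set -e

# Stop taking new work (placeholder process name)
/var/vcap/bosh/bin/monit stop my_worker

# Ask the agent to wait 10 seconds before continuing the update
echo 10
exit 0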


Re: bosh-lite stemcell - refresh?

Dmitriy Kalinin
 

You can build warden stemcells just like any other stemcell -- you just have
to specify the infrastructure as warden -- but there are some
incompatibilities with the agent that will make that stemcell fail on boot.
Marco and I are planning to work on it this week or next and have it
continuously published.

Regarding the kernel mismatch: I've run into this a few times when
transitioning from warden to garden, and I think a way to fix it would be to
bind mount the kernel modules directory from the host into each container so
that the usual tools work properly, regardless of whether a given kernel
module has been loaded.
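
Roughly, that idea looks like this (a sketch only; the container rootfs path
is a placeholder and varies by warden/garden backend):

# On the bosh-lite host: expose the host's /lib/modules inside a container rootfs
ROOTFS=/var/vcap/data/garden/depot/CONTAINER_HANDLE/rootfs   # placeholder path
mkdir -p "$ROOTFS/lib/modules"
mount --bind /lib/modules "$ROOTFS/lib/modules"
mount -o remount,bind,ro "$ROOTFS/lib/modules"   # read-only inside the container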

On Tue, Oct 6, 2015 at 6:42 PM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

I'm working on packaging up some software for bosh that requires a kernel
module and running into an issue.

The bosh stemcell for the warden CPI is very old relative to the kernel
that's actually running the container (and all other stemcells, really). When
I try to modprobe the module into the kernel from within a container, it
fails because the version of the kernel module in the stemcell is way older
than the kernel.

I'm currently adding a provisioning script to the Vagrant host to load the
module but it really shouldn't be necessary.

Can someone point me to some instructions on how to construct the stemcell
for the warden CPI? Alternatively, is there a story to refresh it?

Thanks.

--
Matthew Sykes
matthew.sykes(a)gmail.com


bosh-lite stemcell - refresh?

Matthew Sykes <matthew.sykes@...>
 

I'm working on packaging up some software for bosh that requires a kernel
module and running into an issue.

The bosh stemcell for the warden CPI is very old relative to the kernel
that's actually running the container (and all other stemcells, really). When
I try to modprobe the module into the kernel from within a container, it
fails because the version of the kernel module in the stemcell is way older
than the kernel.

I'm currently adding a provisioning script to the Vagrant host to load the
module but it really shouldn't be necessary.
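
For reference, the stopgap amounts to something like this (my_module is a
placeholder for the actual module name):

# Load the module on the Vagrant host so it is already present for the containers
vagrant ssh -c 'sudo modprobe my_module'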

Can someone point me to some instructions on how to construct the stemcell for
the warden CPI? Alternatively, is there a story to refresh it?

Thanks.

--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Error: Public uaa token must be PEM encoded

André Moreira <andre.lcm at gmail.com...>
 

Did you manage to solve this? I'm facing the same issue.


Re: [cf-dev] Re: proposed stemcell network performance tuning

Benjamin Black <bblack@...>
 

There are two problems:

1) Certain load balancer versions and configurations have unexpected
behavior around TCP timestamps when there is a mix of Windows and
non-Windows clients (really a mix of timestamps and no timestamps). The
result is the Linux servers in Cloud Foundry sending resets long before
ports are exhausted.

2) To date, these parameters have been configured in an ad hoc fashion as
problems are encountered, leading to a lot of variation in configuration
across the various Cloud Foundry components. The ad hoc solutions have in
some cases even been counter-productive: tcp_tw_recycle was previously
enabled on some components, exacerbating #1.

The changes Amit proposes are not exhaustive, but rather conservative, and
address exactly these two problems. Other tuning might be beneficial to the
platform; such additional tuning is not required for these scenarios.


b

On Wed, Sep 30, 2015 at 6:05 PM, Joshua McKenty <jmckenty(a)pivotal.io> wrote:

Amit - I worry about changes to the former in the context of HTTP 1.0 and
1.1, especially without pipelining. What problem are you trying to solve?

If you’re having trouble initiating new sockets, there are other kernel
params we should adjust.


On Sep 29, 2015, at 5:17 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi all,

I'd like to propose tuning a couple kernel parameters related to tcp
performance:

# TCP_FIN_TIMEOUT
# This setting determines the time that must elapse before TCP/IP can
release a closed connection and reuse
# its resources. During this TIME_WAIT state, reopening the connection to
the client costs less than establishing
# a new connection. By reducing the value of this entry, TCP/IP can
release closed connections faster, making more
# resources available for new connections. Adjust this in the presence of
many connections sitting in the
# TIME_WAIT state:

echo 5 > /proc/sys/net/ipv4/tcp_fin_timeout

# TCP_TW_REUSE
# This allows reusing sockets in TIME_WAIT state for new connections when
it is safe from protocol viewpoint.
# Default value is 0 (disabled). It is generally a safer alternative to
tcp_tw_recycle

echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

Currently, these parameters are set by certain jobs in cf-release,
diego-release, and perhaps others. Any VM needing to establish a high
number of incoming/outgoing tcp connections in a short period of time will
be unable to establish new connections without changing these parameters.

We believe these parameters are safe to change across the board, and will
be generally beneficial. The existing defaults made sense for much older
networks, but can be greatly optimized for modern systems.

Please share with the mailing lists if you have any questions or feedback
about this proposal. If you maintain a bosh release and would like to see
how these changes would affect your release, you can create a job which
simply does the above in its startup scripts, and colocate that job with
all the other jobs in a deployment of your release.
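
A minimal sketch of such a colocated job's startup script (the job name and
script path are illustrative; adapt to your release's conventions):

#!/bin/bash
# e.g. the startup script of a small "tcp-tuning" job colocated on every VM
set -e
echo 5 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse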

Thanks,

Amit Gupta
Cloud Foundry PM, OSS Release Integration team



Re: Using AWS temporary security credentials with bosh?

Dmitriy Kalinin
 

You have to use bosh-init to get this feature working.

Sent from my iPhone

On Oct 1, 2015, at 1:11 PM, Satya Thokachichu <tsnraju(a)yahoo.com> wrote:

bosh deployment works like a gem with IAM instance profiles... Having trouble with the microbosh deployment. Please advise.
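
One quick sanity check that an instance profile is attached and serving
temporary credentials (a sketch; ROLE_NAME comes from the first command's
output):

# From the instance itself: list the role(s) exposed by the EC2 metadata service
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Then fetch the temporary keys for that role
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME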


Re: Using AWS temporary security credentials with bosh?

Satya Thokachichu
 

bosh deployment works like a gem with IAM instance profiles... Having trouble with the microbosh deployment. Please advise.


Domain type in PowerDNS table.

Angelo Albanese
 

Hi all
as far as you know, is there any specific reason why the Director enforces that the domain created in Postgres/PowerDNS be of the "NATIVE" type?
https://github.com/cloudfoundry/bosh/blob/master/bosh-director/lib/bosh/director/dns_helper.rb#L104-L105

I'd like to use PowerDNS to notify a slave DNS factory (consisting of several BIND servers) and to update the BINDs on each zone change/record insert. I was able to do that by manually changing the domain record to the MASTER type in the DB. However, the Sequel model enforces a check on the NATIVE type, so when I deploy a new VM via bosh deploy I get an exception (see below). Personally I don't see any reason for that, and the code should limit the existence check to the domain name field only. Any insight is welcome.


D, [2015-10-01 15:04:26 #29455] [task:927] DEBUG -- DirectorJobRunner: (0.000840s) SELECT * FROM "domains" WHERE (("name" = 'microbosh') AND ("type" = 'NATIVE')) LIMIT 1
D, [2015-10-01 15:04:26 #29455] [task:927] DEBUG -- DirectorJobRunner: (0.000088s) BEGIN
E, [2015-10-01 15:04:26 #29455] [task:927] ERROR -- DirectorJobRunner: PG::Error: ERROR: duplicate key value violates unique constraint "domains_name_key"
DETAIL: Key (name)=(microbosh) already exists.: INSERT INTO "domains" ("name", "type") VALUES ('microbosh', 'NATIVE') RETURNING *
D, [2015-10-01 15:04:26 #29455] [task:927] DEBUG -- DirectorJobRunner: (0.000102s) ROLLBACK
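
For reference, the mismatch is easy to see directly in the PowerDNS database
(a sketch; the connection details and database name are assumptions):

# The Director's existence check filters on type = 'NATIVE', so a row manually
# switched to 'MASTER' is not found, and the subsequent INSERT then hits the
# unique constraint on "name".
psql -U vcap powerdns -c "SELECT id, name, type FROM domains WHERE name = 'microbosh';"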

Thx
Angelo


Re: Using AWS temporary security credentials with bosh?

Satya Thokachichu
 

Awesome, thanks! Will try it today. I also have microbosh in my setup; I guess I still need to pass AWS credentials to deploy microbosh.


Re: proposed stemcell network performance tuning

Joshua McKenty <jmckenty@...>
 

Amit - I worry about changes to the former in the context of HTTP 1.0 and 1.1, especially without pipelining. What problem are you trying to solve?

If you’re having trouble initiating new sockets, there are other kernel params we should adjust.

On Sep 29, 2015, at 5:17 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi all,

I'd like to propose tuning a couple kernel parameters related to tcp performance:

# TCP_FIN_TIMEOUT
# This setting determines the time that must elapse before TCP/IP can release a closed connection and reuse
# its resources. During this TIME_WAIT state, reopening the connection to the client costs less than establishing
# a new connection. By reducing the value of this entry, TCP/IP can release closed connections faster, making more
# resources available for new connections. Adjust this in the presence of many connections sitting in the
# TIME_WAIT state:

echo 5 > /proc/sys/net/ipv4/tcp_fin_timeout

# TCP_TW_REUSE
# This allows reusing sockets in TIME_WAIT state for new connections when it is safe from protocol viewpoint.
# Default value is 0 (disabled). It is generally a safer alternative to tcp_tw_recycle

echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

Currently, these parameters are set by certain jobs in cf-release, diego-release, and perhaps others. Any VM needing to establish a high number of incoming/outgoing tcp connections in a short period of time will be unable to establish new connections without changing these parameters.

We believe these parameters are safe to change across the board, and will be generally beneficial. The existing defaults made sense for much older networks, but can be greatly optimized for modern systems.

Please share with the mailing lists if you have any questions or feedback about this proposal. If you maintain a bosh release and would like to see how these changes would affect your release, you can create a job which simply does the above in its startup scripts, and colocate that job with all the other jobs in a deployment of your release.

Thanks,

Amit Gupta
Cloud Foundry PM, OSS Release Integration team


Re: Command 'deploy' failed when running "bosh-init deploy ./bosh.yml"

Remi Tassing
 

In fact, installing and using Ruby 2.2.3 instead got me through.
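
In case it helps anyone else, a sketch of switching to Ruby 2.2.3 with rbenv
(assumes rbenv and ruby-build are already installed):

rbenv install 2.2.3
rbenv global 2.2.3
ruby -v    # should now report ruby 2.2.3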


Re: Proposal to keep persistent disks around for longer period of time

Aristoteles Neto
 

May I go further and ask that snapshots also be kept after bosh delete deployment, but likewise be garbage collected after the retention period? Also asking for a friend; I never accidentally deleted data either.

Aristoteles Neto
dds.neto(a)gmail.com

On 1/10/2015, at 8:45, Dr Nic Williams <drnicwilliams(a)gmail.com> wrote:

Great idea. Can I request that disks are kept in the same garbage collection pool after "bosh delete deployment"? Asking for a friend. I definitely never accidentally deleted production data.




On Wed, Sep 30, 2015 at 3:19 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

Summary: we want to avoid accidental data loss, so we are thinking of keeping persistent disks around after a deployment is modified. Persistent disks will be regularly garbage collected, and you can potentially reattach disks if necessary.

https://github.com/cloudfoundry/bosh-notes/blob/master/persistent-disk-mgmt.md

Thoughts?
