container cannot communicate with the host
Youzhi Zhu
Hi all
I have an app A and a service B. Service B is running on the DEA server (IP 10.0.0.254), and app A needs to connect to service B over TCP. It works normally in my LAN, but when I push A to CF it cannot connect to B. I then executed bin/wsh to get into the container and pinged the host IP, and it is unreachable, as below:

root(a)18mkbd9n808:~# ping 10.0.0.254
PING 10.0.0.254 (10.0.0.254) 56(84) bytes of data.
From 10.0.0.254 icmp_seq=1 Destination Port Unreachable
From 10.0.0.254 icmp_seq=2 Destination Port Unreachable
^C
--- 10.0.0.254 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1002ms

But if I ping another host in the LAN, it can be reached:

root(a)18mkbd9n808:~# ping 10.0.0.253
PING 10.0.0.253 (10.0.0.253) 56(84) bytes of data.
64 bytes from 10.0.0.253: icmp_seq=1 ttl=63 time=1.60 ms
64 bytes from 10.0.0.253: icmp_seq=2 ttl=63 time=0.421 ms
^C
--- 10.0.0.253 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.421/1.013/1.606/0.593 ms

It's weird! My cf-release is cf-175 and I have only one DEA server. Has anyone met this situation before? Thanks! |
|
Re: container cannot communicate with the host
Lev Berman <lev.berman@...>
As far as I know, this is by design: to set up a connection to the same host you need to explicitly tell Warden to allow external traffic - https://github.com/cloudfoundry/warden/blob/master/warden/README.md#net-handle-out-addressmaskport

In more detail:

1) ssh into your VM with the DEA
2) find your Warden handle in /var/vcap/data/dea_ng/db/instances.json - the "warden_handle" field in the hash describing your specific application (the "application_id" value is the same as cf app --guid)
3) cd into /var/vcap/packages/warden/warden
4) bundle install
5) ./bin/warden --socket /var/vcap/data/warden/warden.sock
6) > net_out --handle <your handle from instances.json> --port <your port to open>

This is for CF v208; an earlier version of the Warden client may have a slightly different API - see the command help. A consolidated sketch of these steps follows below.

On Fri, May 22, 2015 at 10:21 AM, Youzhi Zhu <zhuyouzhi03(a)gmail.com> wrote:
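For convenience, the same sequence as a single sketch to run on the DEA VM (the commands are the ones listed above; treat exact paths and flags as version-dependent):

    # 1. Find the container handle: look for the record whose "application_id"
    #    matches the output of `cf app <APP> --guid` and note its "warden_handle".
    less /var/vcap/data/dea_ng/db/instances.json

    # 2. Start the Warden client against the local Warden socket.
    cd /var/vcap/packages/warden/warden
    bundle install
    ./bin/warden --socket /var/vcap/data/warden/warden.sock

    # 3. At the warden> prompt, allow outbound traffic from the container on the
    #    port your service listens on (angle brackets are placeholders):
    #    net_out --handle <warden_handle> --port <service-port>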
Hi all --
Lev Berman
Altoros - Cloud Foundry deployment, training and integration
Github: https://github.com/ldmberman |
|
Re: container cannot communicate with the host
Matthew Sykes <matthew.sykes@...>
Warden explicitly disables access to the container host. If you move up to a more recent level of cf-release, that behavior is configurable with the `allow_host_access` flag. When that flag is true, this line is skipped:

https://github.com/cloudfoundry/warden/blob/4f1e5c049a12199fdd1f29cde15c9a786bd5fac8/warden/root/linux/net.sh#L128

At the level you're at, that rule is always specified, so you'd have to change it manually:

https://github.com/cloudfoundry/warden/blob/17f34e2d7ff1994856a61961210a82e83f24ecac/warden/root/linux/net.sh#L124

On Fri, May 22, 2015 at 3:21 AM, Youzhi Zhu <zhuyouzhi03(a)gmail.com> wrote:
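Purely as an illustration (not from the original post), one way to see the rule in question on an older DEA; chain and rule names vary by Warden version, so verify against net.sh before touching anything:

    # Hypothetical sketch: list the filter rules Warden installed and look for the
    # REJECT rule that answers container traffic to the host (10.0.0.254 in the
    # original report) with "port unreachable".
    sudo iptables -S | grep -i reject

    # A rule printed by `iptables -S` as "-A <chain> ..." can be deleted by re-issuing
    # it with -D instead of -A. Warden may re-create its rules when containers are
    # rebuilt, so editing net.sh (the line linked above) is the persistent route on
    # this version.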
Hi all --
Matthew Sykes matthew.sykes(a)gmail.com |
|
List Reply-To behavior
Matthew Sykes <matthew.sykes@...>
The vcap-dev list used to use a Reply-To header pointing back to the list
such that replying to a post would automatically go back to the list. The current mailman configuration for cf-dev does not set a Reply-To header and the default behavior is to reply to the author. While I understand the pros and cons of setting the Reply-To header, this new behavior has bitten me several times and I've found myself re-posting a response to the list instead of just the author. I'm interested in knowing if anyone else has been bitten by this behavior and would like a Reply-To header added back... Thanks. -- Matthew Sykes matthew.sykes(a)gmail.com |
|
Re: List Reply-To behavior
Daniel Mikusa
On Fri, May 22, 2015 at 6:22 AM, Matthew Sykes <matthew.sykes(a)gmail.com> wrote:

The vcap-dev list used to use a Reply-To header pointing back to the list

+1 and +1

Dan
|
|
Re: List Reply-To behavior
James Bayer
yes, this has affected me
On Fri, May 22, 2015 at 4:33 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:
--
Thank you, James Bayer |
|
Question about services on Cloud Foundry
Kinjal Doshi
Hi,
From the architecture point of view I understand that there are no services explicitly associated with CF. However, the following doc is very confusing: http://docs.cloudfoundry.org/devguide/services/managed.html

Would be great if someone could explain the meaning of managed services here.

Thanks,
Kinjal |
|
Re: cf-release v209 published
Simon Johansson <simon@...>
i wanted to share the great news that the new skinny buildpacks reduced the size of cf-release from 5.2gb -> 3.5gb!

This is great news, good job buildpack team!

On Thu, May 21, 2015 at 4:40 PM, James Bayer <jbayer(a)pivotal.io> wrote:

skinny buildpacks refer to each buildpack no longer shipping old |
|
Runtime PMC: 2015-05-19 Notes
Eric Malm <emalm@...>
Hi, all,
The Runtime PMC met on Tuesday, 2015-05-19. Permanent notes are available at:

https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-05-19-runtime.md

and are included below.

Best,
Eric

---

# Runtime PMC Meeting 2015-05-19

## Agenda

1. Current Backlog and Priorities
1. PMC Lifecycle Activities
1. Open Discussion

## Attendees

* Chip Childers, Cloud Foundry Foundation
* Matt Sykes, IBM
* Atul Kshirsagar, GE
* Erik Jasiak, Pivotal
* Sree Tummidi, Pivotal
* Eric Malm, Pivotal
* Shannon Coen, Pivotal
* Will Pragnell, Pivotal
* Marco Nicosia, Pivotal

## Current Backlog and Priorities

### Runtime

* Shannon filling in for Dieu this week
* support for context-based routing: delivered
* investigating query performance
* addressing outstanding pull requests
* bump to UAA
* issues with loggregator in acceptance environment, a blocker to cutting the stabilization release for the collector

### Diego

* ssh access largely done, currently working on routing ssh traffic to the proxy
* performance breadth: completed 50-cell test, investigating bulk processing in jobs that do so
* refining CI to improve recording of compatible versions of Diego and CF
* processing of PRs from Garden and Lattice is prioritized
* stories queued up to investigate securing identified gaps in Diego

### UAA

* 2.2.6, 2.3.0 releases, notes available
* upgraded Spring versions
* update to JRE expected in v210 of cf-release
* more LDAP work, chaining in identity zone: both LDAP and internal authentication can work simultaneously
* support for New Relic instrumentation, will appear after v209
* upcoming:
  * risk assessment of persistent token storage: understand performance implications
  * starting work on password policy: multi-tenant for default zone and additional zones
  * OAuth client groups: authorization to manage clients
  * SAML support
* question from Matt Sykes:
  * would like to discuss the IBM PR for the UAA DB migration strategy with the team

### Garden

* investigating management of disk quotas
* replacing C/Bash code with Go to enable instrumentation, security, and maintainability
* planning to remove the default VCAP user in Garden

### Lattice

* nearly done with the last stories before releasing 0.2.5
* Cisco contributed OpenStack support
* baking deployment automation into published images on some providers
* improved documentation for how to install Lattice on VMs
* next work planned is support for CF-like app lifecycle management (pushing code in addition to Docker)

### TCP Router

* building out icebox to reflect inception
* question from Matt Sykes:
  * how to incorporate a new project into the PMC? IBM parties surprised by the announcement at Summit
  * Chip: inconsistent policy so far; maybe this belongs alongside gorouter in the Runtime PMC
  * working on a process for review and discussion of incubating projects
  * Shannon: first step will be to produce a proposal and discuss it with the community

### LAMB

* big rewind project on the Datadog firehose nozzle: limitation in doppler on message size, dropping messages
* working to resolve those problems: improving the number of concurrent reads, marshaling efficiency
* seeing increases in message loss in Runtime environments: may be another source of contention, working with them to resolve
* Datadog nozzle work:
  * looking at developing a Graphite nozzle from community work
  * will investigate community interest in Graphite support
* naming alignment from loggregator to doppler
* instrumentation of statsd for larger message sizes, work to phase out the collector and NATS in CF
  * goal is to stream metrics directly to the firehose
* question from Matt Sykes: story about the protobuf protocol proposal
  * best way to support VM tagging in log messages: distinguish between types of data in log messages
  * goal would be to improve the implementation: more generic API for message data; understand implications of this change

### Greenhouse

* accepted code from HP
* will get support from Microsoft with regard to interest in the entire Microsoft stack

## PMC Lifecycle Activities

None to report.

## Open Discussion

None to report. |
|
Re: Addressing buildpack size
Daniel Mikusa
On Fri, May 8, 2015 at 3:09 PM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:
Hey Dan,

This sounds cool. Can't wait to see what you guys come up with here. I've been thinking about the subject a bit, but haven't come up with any great ideas.

The first thought that came to mind was a transparent network proxy, like Squid, which would just automatically cache the files as they're accessed. It's nice and simple - nothing in the build pack would need to change to take advantage of it - but I'm not sure how that would work in a completely offline environment, as I'm not sure how you'd seed the cache.

Another thought was for the DEA to provide some additional hints to the build packs about how they could locate binaries. Perhaps a special environment variable like CF_BP_REPO=http://repo.system.domain/. The build pack could then take that and use it to generate URLs to its binary resources. A variation on that would be to check this repo first, and then fall back to some global external repo if available (i.e. the most recent stuff is on CF_BP_REPO, older stuff needs Internet access to download). Yet another variation would be for the CF_BP_REPO to start small and grow as things are requested. For example, if you request a file that doesn't exist, CF_BP_REPO would try to download it from the Internet, cache it, and stream it back to the app. (A rough sketch of this fallback idea follows below.)

Anyway, I'm just thinking out loud now. Thanks for the update!

Dan
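A rough, purely hypothetical sketch of that fallback idea; CF_BP_REPO is the imagined variable from the paragraph above, and the dependency name and public URL are placeholders:

    #!/bin/bash
    # Hypothetical: prefer a local dependency repo if CF_BP_REPO is set and has the
    # file, otherwise fall back to a public location on the Internet.
    set -eu

    DEPENDENCY="ruby-2.1.4.tgz"                                    # placeholder
    PUBLIC_URL="https://buildpacks.example.com/deps/${DEPENDENCY}" # placeholder
    TARGET="/tmp/${DEPENDENCY}"

    if [ -n "${CF_BP_REPO:-}" ] && curl -fsI "${CF_BP_REPO%/}/${DEPENDENCY}" >/dev/null 2>&1; then
      # The local repo has (or can fetch-and-cache) the file, so prefer it.
      curl -fsSL -o "${TARGET}" "${CF_BP_REPO%/}/${DEPENDENCY}"
    else
      # Fall back to the Internet for anything the local repo does not carry.
      curl -fsSL -o "${TARGET}" "${PUBLIC_URL}"
    fi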
|
|
Doppler zoning query
john mcteague <john.mcteague@...>
We map our DEAs, dopplers and traffic controllers into 5 logical zones using the various zone properties of doppler, metron_agent and traffic_controller. This aligns with our physical failure domains in OpenStack.

During a recent load test we discovered that zones 4 and 5 were receiving no load at all; all traffic went to zones 1-3. What would cause this unbalanced distribution?

I have a single app running 30 instances and have verified it is evenly balanced across all 5 zones (6 instances in each). I have additionally verified that each logical zone in the BOSH yml contains 1 DEA, doppler server and traffic controller.

Thanks,
John |
|
Release Notes for v209
Shannon Coen
https://github.com/cloudfoundry/cf-release/releases/tag/v209
https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v209

Best,
Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc. |
|
Re: Doppler zoning query
James Bayer
john,
can you say more about "receiving no load at all"? for example, if you restart one of the app instances in zone 4 or zone 5, do you see logs with "cf logs"? you can target a single app instance index to get restarted by using a "cf curl" command for terminating an app index [1]. you can find the details in the json output from "cf stats", which should show you the private IPs of the DEAs hosting your app and help you figure out which zone each app index is in. (a sketch of these commands appears at the end of this message.)

[1] http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be happening is that for some reason DEAs in zone 4 or zone 5 are not routable somewhere along the path. reasons for that could be:

* DEAs in zone 4 / zone 5 not getting the apps hosted there listed in the routing table
* the routing table may be correct, but for some reason the routers cannot reach DEAs in zone 4 or zone 5 with outbound traffic and fail over to instances on DEAs 1-3 that they can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com> wrote:

We map our dea's , dopplers and traffic controllers in 5 logical zones

--
Thank you,
James Bayer |
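A sketch of the commands referenced above (the app name and instance index are placeholders):

    APP=hello-world                      # placeholder app name
    GUID=$(cf app "$APP" --guid)

    # Per-instance stats, including the host (DEA) IP for each index, which lets you
    # map indexes to zones.
    cf curl "/v2/apps/${GUID}/stats"

    # Terminate a single instance (here index 3) and let CF restart it, then watch
    # its logs to see whether the new instance shows up in "cf logs".
    cf curl -X DELETE "/v2/apps/${GUID}/instances/3"
    cf logs "$APP"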
|
Release Notes for v210
James Bayer
please note that this release addresses CVE-2015-3202 and CVE-2015-1834 and
we strongly recommend upgrading to this release. more details will be forthcoming after the long united states holiday weekend.

https://github.com/cloudfoundry/cf-release/releases/tag/v210
https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210

--
Thank you,
James Bayer |
|
USN-2617-1 and CVE-2015-3202 FUSE vulnerability
James Bayer
Severity: High
Vendor: Canonical Ubuntu

Vulnerable Versions: Canonical Ubuntu 10.04 and 14.04

CVE References: USN-2617-1, CVE-2015-3202

Description:
A privilege escalation vulnerability was identified in a component used in the Cloud Foundry stacks lucid64 and cflinuxfs2. The FUSE package incorrectly filtered environment variables and could be made to overwrite files as an administrator, allowing a local attacker to gain administrative privileges.

Affected Products and Versions:
- Cloud Foundry Runtime cf-release versions v183 and all releases through v209

Mitigation:
The Cloud Foundry project recommends that Cloud Foundry Runtime deployments running release v209 or earlier upgrade to v210 or later. Note that the FUSE package has been removed from the lucid64 stack in the v210 release, while it has been patched in the cflinuxfs2 stack (Trusty). Developers should use the cflinuxfs2 stack in order to use FUSE with v210 and higher.

Credit:
This issue was identified by Tavis Ormandy

--
Thank you,
James Bayer |
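For apps that need FUSE after upgrading, one illustrative way to target the patched stack (the app name is a placeholder):

    # List the stacks available in the deployment.
    cf stacks

    # Push (or re-push) the app onto the patched cflinuxfs2 stack.
    cf push my-fuse-app -s cflinuxfs2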
|
CVE-2015-1834 CC Path Traversal vulnerability
James Bayer
Severity: Medium
Vendor: Cloud Foundry Foundation

Vulnerable Versions: Cloud Foundry Runtime releases prior to v208

CVE References: CVE-2015-1834

Description:
A path traversal vulnerability was identified in the Cloud Foundry component Cloud Controller. Path traversal is the "breaking out" of a given directory structure through relative file paths in user input. It aims at accessing files and directories stored outside the web root folder, for disallowed reading or even execution of arbitrary system commands. An attacker could use a certain parameter of the file path, for instance, to inject "../" sequences in order to navigate through the file system. In this particular case a remote authenticated attacker can exploit the identified vulnerability to upload arbitrary files to the server running a Cloud Controller instance - outside the isolated application container.

Affected Products and Versions:
Cloud Foundry Runtime cf-release versions v207 or earlier are susceptible to the vulnerability

Mitigation:
The Cloud Foundry project recommends that Cloud Foundry Runtime deployments running release v207 or earlier upgrade to v208 or later.

Credit:
This issue was identified by Swisscom / SEC Consult

--
Thank you,
James Bayer |
|
Re: Release Notes for v210
James Bayer
CVE-2015-3202 details:
http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000194.html

CVE-2015-1834 details:
http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000195.html

On Sat, May 23, 2015 at 9:41 PM, James Bayer <jbayer(a)pivotal.io> wrote:
please note that this release addresses CVE-2015-3202 and CVE-2015-1834 --
Thank you, James Bayer |
|
Delivery Status Notification (Failure)
Frank Li <alivedata@...>
Hi,
When I run 'bosh deploy' , I got a error''Error 400007: `uaa_z1/0' is not running after update": *Started preparing configuration > Binding configuration. Done (00:00:04)* *Started updating job ha_proxy_z1 > ha_proxy_z1/0. Done (00:00:13)* *Started updating job nats_z1 > nats_z1/0. Done (00:00:27)* *Started updating job etcd_z1 > etcd_z1/0. Done (00:00:14)* *Started updating job postgres_z1 > postgres_z1/0. Done (00:00:22)* *Started updating job uaa_z1 > uaa_z1/0. Failed: `uaa_z1/0' is not running after update (00:04:02)* *Error 400007: `uaa_z1/0' is not running after update* bosh task 132 --debug *I, [2015-05-22 03:58:56 #2299] [instance_update(uaa_z1/0)] INFO -- DirectorJobRunner: Waiting for 19.88888888888889 seconds to check uaa_z1/0 status* *D, [2015-05-22 03:58:56 #2299] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cf-warden* *D, [2015-05-22 03:59:01 #2299] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cf-warden* *D, [2015-05-22 03:59:06 #2299] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cf-warden* *D, [2015-05-22 03:59:11 #2299] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cf-warden* *I, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] INFO -- DirectorJobRunner: Checking if uaa_z1/0 has been updated after 19.88888888888889 seconds* *D, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] DEBUG -- DirectorJobRunner: SENT: agent.04446a2b-a103-4a33-9bbe-d8b07d2c6466 {"method":"get_state","arguments":[],"reply_to":"director.2052649d-bafc-4d7a-8184-caa0373ec71f.55816c88-fea4-45cb-a7a9-13d7579b459a"}* *D, [2015-05-22 03:59:15 #2299] [] DEBUG -- DirectorJobRunner: RECEIVED: director.2052649d-bafc-4d7a-8184-caa0373ec71f.55816c88-fea4-45cb-a7a9-13d7579b459a {"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"uaa_z1","release":"","template":"uaa","version":"e3278da4c650f21c13cfa935814233bc79f156f0","sha1":"c8f3ee66bd955a58f95dbb7c02ca008c5e91ab6a","blobstore_id":"00e2df47-e90f-414d-8965-f97e1ec81b24","templates":[{"name":"uaa","version":"e3278da4c650f21c13cfa935814233bc79f156f0","sha1":"c8f3ee66bd955a58f95dbb7c02ca008c5e91ab6a","blobstore_id":"00e2df47-e90f-414d-8965-f97e1ec81b24"},{"name":"metron_agent","version":"51cf1a4f2e361bc2a2bbd1bee7fa324fe7029589","sha1":"50fccfa5198b0ccd6b39109ec5585f2502011da3","blobstore_id":"beac8dfd-57e9-45c0-8529-56e4c73154bc"},{"name":"consul_agent","version":"6a3b1fe7963fbcc3dea0eab7db337116ba062056","sha1":"54c6a956f7ee1c906e0f8e8aaac13a25584e7d3f","blobstore_id":"aee73914-cf03-4e7c-98a5-a1695cbc2cc5"}]},"packages":{"common":{"name":"common","version":"99c756b71550530632e393f5189220f170a69647.1","sha1":"6da06edd87b2d78e5e0e9848c26cdafe1b3a94eb","blobstore_id":"6783e7af-2366-4142-7199-ac487f359adb"},"consul":{"name":"consul","version":"d828a4735b02229631673bc9cb6aab8e2d56eda5.1","sha1":"15d541d6f0c8708b9af00f045d58d10951755ad6","blobstore_id":"a9256e97-0940-45dc-6003-77141979c976"},"metron_agent":{"name":"metron_agent","version":"122c9dea1f4be749d48bf1203ed0a407b5a2e1ff.1","sha1":"b8241c6482b03f0d010031e5e99cbae4a909ae05","blobstore_id":"8aa07a49-753a-4200-4cbb-cbb554034986"},"ruby-2.1.4":{"name":"ruby-2.1.4","version":"5a4612011cb6b8338d384acc7802367ae5e11003.1","sha1":"032f58346f55ad468c83e015997ff50091a76ef7","blobstore_id":"afaf9c7a-5633-40cc-7a7a-5d285a560b20"},"uaa":{"name":"uaa","version":"05b84acccba5cb31a170d9cad531d22ccb5df8a5.1","sha1":"ae0a7aa73132db192c2800d0094c607a41d56ddb","blobstore_id":"b474ea8d-5c66-4eea-4a7e-689a0cd0de63"}},"configuration_ha
sh":"c1c40387ae387a29bb69124e3d9f741ee50f0d48","networks":{"cf1":{"cloud_properties":{"name":"random"},"default":["dns","gateway"],"dns_record_name":"0.uaa-z1.cf1.cf-warden.bosh","ip":"10.244.0.130","netmask":"255.255.255.252"}},"resource_pool":{"cloud_properties":{"name":"random"},"name":"medium_z1","stemcell":{"name":"bosh-warden-boshlite-ubuntu-lucid-go_agent","version":"64"}},"deployment":"cf-warden","index":0,"persistent_disk":0,"rendered_templates_archive":{"sha1":"2ebf29eac887fb88dab65aeb911a36403c41b1cb","blobstore_id":"38890fbc-f95e-44a9-9f19-859dc42ec381"},"agent_id":"04446a2b-a103-4a33-9bbe-d8b07d2c6466","bosh_protocol":"1","job_state":"failing","vm":{"name":"755410d0-6697-4505-754e-9521d23788ef"},"ntp":{"message":"file missing"}}}* *E, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] ERROR -- DirectorJobRunner: Error updating instance: #<Bosh::Director::AgentJobNotRunning: `uaa_z1/0' is not running after update>* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in `update'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in `block (2 levels) in update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in `with_thread_name'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in `block in update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in `update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in `block (2 levels) in update_instances'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in `loop'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in `block in create_thread'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'* *D, [2015-05-22 03:59:15 #2299] [] DEBUG -- DirectorJobRunner: Worker thread raised exception: `uaa_z1/0' is not running after update - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in `update'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in `block (2 levels) in update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in `with_thread_name'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in `block in update_instance'* 
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in `update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in `block (2 levels) in update_instances'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in `loop'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in `block in create_thread'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'* *D, [2015-05-22 03:59:16 #2299] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: Shutting down pool* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: (0.004399s) SELECT "stemcells".* FROM "stemcells" INNER JOIN "deployments_stemcells" ON (("deployments_stemcells"."stemcell_id" = "stemcells"."id") AND ("deployments_stemcells"."deployment_id" = 1))* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: Deleting lock: lock:deployment:cf-warden* *D, [2015-05-22 03:59:16 #2299] [] DEBUG -- DirectorJobRunner: Lock renewal thread exiting* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: Deleted lock: lock:deployment:cf-warden* *I, [2015-05-22 03:59:16 #2299] [task:132] INFO -- DirectorJobRunner: sending update deployment error event* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: SENT: hm.director.alert {"id":"7245631b-b6b3-43df-bd43-65b19e23f6ae","severity":3,"title":"director - error during update deployment","summary":"Error during update deployment for cf-warden against Director c6f166bd-ddac-4f7d-9c57-d11c6ad5133b: #<Bosh::Director::AgentJobNotRunning: `uaa_z1/0' is not running after update>","created_at":1432267156}* *E, [2015-05-22 03:59:16 #2299] [task:132] ERROR -- DirectorJobRunner: `uaa_z1/0' is not running after update* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in `update'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in `block (2 levels) in update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in `with_thread_name'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in `block in update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in 
`advance_and_track'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in `update_instance'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in `block (2 levels) in update_instances'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in `loop'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in `block in create_thread'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'* */var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: (0.000396s) BEGIN* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: (0.001524s) UPDATE "tasks" SET "state" = 'error', "timestamp" = '2015-05-22 03:59:16.090280+0000', "description" = 'create deployment', "result" = '`uaa_z1/0'' is not running after update', "output" = '/var/vcap/store/director/tasks/132', "checkpoint_time" = '2015-05-22 03:58:52.002311+0000', "type" = 'update_deployment', "username" = 'admin' WHERE ("id" = 132)* *D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner: (0.002034s) COMMIT* *I, [2015-05-22 03:59:16 #2299] [] INFO -- DirectorJobRunner: Task took 5 minutes 55.32297424799998 seconds to process.* uaa section in cf-manifest.yml as following: *uaa:* *admin:* *client_secret: admin-secret* *authentication:* *policy:* *countFailuresWithinSeconds: null* *lockoutAfterFailures: null* *lockoutPeriodSeconds: null* *batch:* *password: batch-password* *username: batch-username* *catalina_opts: -Xmx192m -XX:MaxPermSize=128m* *cc:* *client_secret: cc-secret* *clients:* *app-direct:* *access-token-validity: 1209600* *authorities: app_direct_invoice.write* *authorized-grant-types: authorization_code,client_credentials,password,refresh_token,implicit* *override: true* *redirect-uri: https://console.10.244.0.34.xip.io <https://console.10.244.0.34.xip.io/>* *refresh-token-validity: 1209600* *secret: app-direct-secret* *cc-service-dashboards:* *authorities: clients.read,clients.write,clients.admin* *authorized-grant-types: client_credentials* *scope: openid,cloud_controller_service_permissions.read* *secret: cc-broker-secret* *cloud_controller_username_lookup:* *authorities: scim.userids* *authorized-grant-types: client_credentials* *secret: cloud-controller-username-lookup-secret* *developer_console:* *access-token-validity: 1209600* *authorities: scim.write,scim.read,cloud_controller.read,cloud_controller.write,password.write,uaa.admin,uaa.resource,cloud_controller.admin,billing.admin* *authorized-grant-types: authorization_code,client_credentials* *override: true* *redirect-uri: https://console.10.244.0.34.xip.io/oauth/callback <https://console.10.244.0.34.xip.io/oauth/callback>* *refresh-token-validity: 1209600* *scope: openid,cloud_controller.read,cloud_controller.write,password.write,console.admin,console.support* *secret: console-secret* *doppler:* *authorities: uaa.resource* *override: true* *secret: 
doppler-secret* *gorouter:* *authorities: clients.read,clients.write,clients.admin,route.admin,route.advertise* *authorized-grant-types: client_credentials,refresh_token* *scope: openid,cloud_controller_service_permissions.read* *secret: gorouter-secret* *login:* *authorities: oauth.login,scim.write,clients.read,notifications.write,critical_notifications.write,emails.write,scim.userids,password.write* *authorized-grant-types: authorization_code,client_credentials,refresh_token* *override: true* *redirect-uri: http://login.10.244.0.34.xip.io <http://login.10.244.0.34.xip.io/>* *scope: openid,oauth.approvals* *secret: login-secret* *notifications:* *authorities: cloud_controller.admin,scim.read* *authorized-grant-types: client_credentials* *secret: notification-secret* *issuer: https://uaa.10.244.0.34.xip.io <https://uaa.10.244.0.34.xip.io/>* *jwt:* *signing_key: |+* *-----BEGIN RSA PRIVATE KEY-----* *MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1* *JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6* *0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB* *AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA* *Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0* *KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J* *duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE* *xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8* *+5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek* *lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h* *jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh* *HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+* *4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=* *-----END RSA PRIVATE KEY-----* *verification_key: |+* *-----BEGIN PUBLIC KEY-----* *MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d* *KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX* *qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug* *spULZVNRxq7veq/fzwIDAQAB* *-----END PUBLIC KEY-----* *ldap: null* *login: null* *no_ssl: true* *restricted_ips_regex: 10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|169\.254\.\d{1,3}\.\d{1,3}|127\.\d{1,3}\.\d{1,3}\.\d{1,3}|172\.1[6-9]{1}\.\d{1,3}\.\d{1,3}|172\.2[0-9]{1}\.\d{1,3}\.\d{1,3}|172\.3[0-1]{1}\.\d{1,3}\.\d{1,3}* *scim:* *external_groups: null* *userids_enabled: true* *users:* *- admin|admin|scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose* *spring_profiles: null* *url: https://uaa.10.244.0.34.xip.io <https://uaa.10.244.0.34.xip.io/>* *user: null* *uaadb:* *address: 10.244.0.30* *databases:* *- citext: true* *name: uaadb* *tag: uaa* *db_scheme: postgresql* *port: 5524* *roles:* *- name: uaaadmin* *password: admin* *tag: admin* Can anyone help me ?Thanks! Best Regards, Frank |
|
Re: Question about services on Cloud Foundry
James Bayer
it simply means that there is a Service Broker, and it works in conjunction with the "marketplace", so commands like "cf marketplace", "cf create-service", "cf bind-service" and related all work with the service. user provided services don't show up in the marketplace-related commands and they don't have service plans, but they still work with bind/unbind. (an illustrative set of commands follows below.)

On Fri, May 22, 2015 at 7:44 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
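An illustrative comparison (the service, plan, and app names are made-up examples, not from this thread):

    # Managed service: offered by a service broker and visible in the marketplace.
    cf marketplace
    cf create-service p-mysql 100mb-dev my-db        # hypothetical service/plan names
    cf bind-service my-app my-db

    # User-provided service: no broker, no plan, not listed in the marketplace,
    # but bind/unbind still inject its credentials into VCAP_SERVICES.
    cf create-user-provided-service my-legacy-db -p '{"uri":"mysql://user:pass@db.example.com:3306/mydb"}'
    cf bind-service my-app my-legacy-db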
Hi, --
Thank you, James Bayer |
|
Re: Doppler zoning query
john mcteague <john.mcteague@...>
I am seeing logs from zone 4 and 5 when tailing the logs (cf logs hello-world | grep App | awk '{ print $2 }'). I see a relatively even balance between all app instances, yet the dopplers in zones 1-3 consume far greater cpu resources (15x in some cases) than those in zones 4 and 5. Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30 instances, 6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4 vcpus.

John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:
john, |
|