cf-release v211 is available
cf-release v211 was released on June 4th, 2015.

IMPORTANT: This release removes the lucid64 stack; please ensure apps are migrated prior to upgrade.
IMPORTANT: If using the postgres job included within cf-release, please carefully read the note below about the upgrade to postgres 9.4.2.

Runtime
- Remove lucid64 stack completely from cf-release. details <https://www.pivotaltracker.com/story/show/95483678>
  - Please ensure all your applications have been migrated to the cflinuxfs2 stack prior to upgrading to this release.
  - Once all apps have been migrated to the new stack, operators will need to manually delete the lucid64 stack via the CC API using the admin user: http://apidocs.cloudfoundry.org/211/stacks/delete_a_particular_stack.html
- Upgraded postgres included in cf-release to postgres 9.4.2; see the note below about the postgres job upgrade. details <https://www.pivotaltracker.com/story/show/77680398>
- [Experimental] Work continues on /v3 and Application Process Types. details <https://www.pivotaltracker.com/epic/show/1334418>
- [Experimental] Work continues on the Route API. details <https://www.pivotaltracker.com/epic/show/1590160>
- [Experimental] Work continues on Context Path Routes. details <https://www.pivotaltracker.com/epic/show/1808212>
- Work in progress on support for user-provided tags on service instances. details <https://www.pivotaltracker.com/epic/show/1879702>
- cloudfoundry/cf-release #689 <https://github.com/cloudfoundry/cf-release/pull/689>: Fix failing cc_ng and cc_ng_worker with NFS. details <https://www.pivotaltracker.com/story/show/95602450>
- Remove default support address for CC. details <https://www.pivotaltracker.com/story/show/92724640>
- Increased the cloud_controller_ng start timeout to allow long ccdb migrations to run. details <https://github.com/cloudfoundry/cf-release/commit/ff57572d67c9c0e7e38d9b2298762faba8547727>
- cloudfoundry/cf-release #680 <https://github.com/cloudfoundry/cf-release/pull/680>: Try the staticfile buildpack before the nodejs/ruby buildpacks. details <https://www.pivotaltracker.com/story/show/94594952>
- cloudfoundry/stacks #16 <https://github.com/cloudfoundry/cf-release/pull/16>: Add cmake to rootfses. details <https://www.pivotaltracker.com/story/show/94022672>
- cloudfoundry/stacks #17 <https://github.com/cloudfoundry/cf-release/pull/17>: Add autoconf to rootfs. details <https://www.pivotaltracker.com/story/show/94411176>
- cloudfoundry/cf-release #682 <https://github.com/cloudfoundry/cf-release/pull/682>: Upgrade ruby buildpack to v1.4.2. details <https://www.pivotaltracker.com/story/show/95087396>
- cloudfoundry/cf-release #683 <https://github.com/cloudfoundry/cf-release/pull/683>: Upgrade python buildpack to v1.3.2. details <https://www.pivotaltracker.com/story/show/95163484>
- Make 'dea_next.stacks' overridable in the manifest. details <https://www.pivotaltracker.com/story/show/92393276>
- cloudfoundry/cf-release #681 <https://github.com/cloudfoundry/cf-release/pull/681>: Add security group for cf-mysql subnets on bosh-lite. details <https://www.pivotaltracker.com/story/show/95024316>
- cloudfoundry/dea_ng #164 <https://github.com/cloudfoundry/cf-release/pull/164>: Add warden_handle method to staging task. details <https://www.pivotaltracker.com/story/show/95427526>
- Use MASQUERADE instead of SNAT for container NAT. details <https://github.com/cloudfoundry/warden/commit/4f1e5c049a12199fdd1f29cde15c9a786bd5fac8>
- Throw better errors for the apps stats endpoint. details <https://www.pivotaltracker.com/story/show/93268820>
- Fix buildpack_cache deletion issue. details <https://www.pivotaltracker.com/story/show/95474242>

Loggregator
- If no Dopplers are available in an AZ, Metron will now fail over across AZs. details <https://www.pivotaltracker.com/story/show/86649938>
- StatsD support broken out of Metron and into a separate process. This new class of items for adding data into Metron/Loggregator is now known as "injectors"; further info to follow on cf-dev. details <https://www.pivotaltracker.com/story/show/95065248>, repo <https://github.com/cloudfoundry/statsd-injector>
- All Loggregator metrics now use a Metron /varz shim instead of writing to a local /varz, so most Loggregator metrics will have a different prefix. All former and new metrics are documented in the wiki <https://github.com/cloudfoundry/loggregator/wiki/Loggregator--varz-metrics-page> (scroll right) and in a public Google doc <https://docs.google.com/spreadsheets/d/176yIaJChXEmvm-CjybmwopdRGQfDGrSzo3J_Mx8mMnk/edit?usp=sharing>. Story details <https://www.pivotaltracker.com/story/show/95539818>. Other CF components to follow; docs to be formalized with the documentation team.
- The NOAA client library fixed a Close() issue, independent of cf-release. The change is backward-incompatible. details <https://www.pivotaltracker.com/story/show/94103174> | cf-dev announcement <http://lists.cloudfoundry.org/pipermail/cf-dev/2015-June/000316.html> | github diff <https://github.com/cloudfoundry/noaa/commit/0de0770ca632948b6ae49ab28c1c04e260d31bbb>
- Removed the Dropsonde protocol's dependence on gogoproto for non-Go builds. details <https://www.pivotaltracker.com/story/show/94688854>
- Increased Doppler marshal/unmarshal efficiency to compensate for message size changes. details <https://www.pivotaltracker.com/story/show/93439456>
- [Bug Fix] The syslog drain binder is no longer leaking connections to cloud_controller. details <https://www.pivotaltracker.com/story/show/93932106>
- [Bug Fix] LoggregatorClientPool is no longer leaking clients to non-existent Dopplers. details <https://www.pivotaltracker.com/story/show/95008094>

Used Configuration
- BOSH Version: 152
- Stemcell Version: 2969
- CC API Version: 2.28.0

Commit summary <http://htmlpreview.github.io/?https://github.com/cloudfoundry-community/cf-docs-contrib/blob/master/release_notes/cf-211-whats-in-the-deploy.html>

Compatible Diego Version
- final release 0.1281.0, commit <https://github.com/cloudfoundry-incubator/diego-release/commit/fc114972868c3adc544f22860ef77593cb624e64>

Postgres Job Upgrade
The postgres job will upgrade the postgres database to version 9.4.2. Postgres will be unavailable during this upgrade. A copy of the database is made for the upgrade, so you may need to adjust the persistent disk capacity of the postgres job.
If the upgrade fails:
- The old database is still available at /var/vcap/store/postgres
- The new database is at /var/vcap/store/postgres-9.4.2
- A marker file is kept at /var/vcap/store/FLAG_POSTGRES_UPGRADE to prevent the upgrade from happening again.
- pg_upgrade logs that may have details of why the migration failed can be found in /home/vcap/

To attempt the upgrade again, remove /var/vcap/store/postgres-9.4.2 and /var/vcap/store/FLAG_POSTGRES_UPGRADE.

To roll back to a previous release, remove /var/vcap/store/postgres-9.4.2 and /var/vcap/store/FLAG_POSTGRES_UPGRADE. The previous release has no knowledge of these files, but they will conflict if you later try the upgrade again.

Post upgrade, both the old and new databases are kept; the old database is moved to /var/vcap/store/postgres-previous. The postgres-previous directory is kept until the next postgres upgrade is performed. You are free to remove it once you have verified that the new database works and you want to reclaim the space.

Manifest and Job Spec Changes
- properties.cc.stacks: the default lucid64 stack has been removed
- properties.dea_next.stacks: the default lucid64 stack has been removed

https://github.com/cloudfoundry/cf-release/releases/tag/v211
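For operators working through the lucid64 removal above, a minimal sketch of the manual stack deletion, assuming an admin login and the cf CLI's generic `cf curl` helper; the GUID shown is a placeholder you must look up first.

    # List stacks and note the GUID of the lucid64 entry (admin required).
    cf curl /v2/stacks
    # Delete the lucid64 stack by GUID (placeholder, not a real GUID),
    # per the apidocs link in the notes above.
    cf curl -X DELETE /v2/stacks/STACK_GUID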
Re: cf-dev Digest, Vol 3, Issue 18
Hello Supraja,
Did you delete the service broker with `cf delete-service-broker`, and then register it again with `cf create-service-broker`?
Then did you try to make the service plans public with `cf enable-service-access testService`?
When you ran this last command, are you sure you were an admin? You will receive the error "Service offering testService not found" if you are not authenticated as an admin user.
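For reference, that sequence looks roughly like this; the broker name, credentials, and URL below are placeholders rather than the actual values:

    cf delete-service-broker my-broker
    cf create-service-broker my-broker broker-user broker-password https://my-broker.example.com
    cf enable-service-access testService    # must be run as an admin user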
Best,
Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.
Date: Wed, 3 Jun 2015 17:18:43 -0700
From: Supraja Yasoda <ykmsupraja(a)gmail.com>
To: cf-dev(a)lists.cloudfoundry.org
Subject: [cf-dev] Service offering testService not found
Message-ID: <CADnEmc45Wckbup+GNBtCm_TRjx-Y7jwXBFk2YkqQZ8YO5RxRLw(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Hi,
I deleted the service broker after removing its service instances. I created it again, but now I am unable to enable service access for the service: I get the error "Service offering testService not found". When I do a GET, I see the catalog returns the service ID and name under the service definition. Could someone advise on this?
*Regards,*
Re: Log connections from security groups - bosh lite
I seem to remember app security group logging having an issue on bosh-lite that isn't present when you have a DEA in a VM. I'll see if Dieu remembers.

On Fri, Jun 5, 2015 at 1:06 PM, Michael <michael.grifalconi(a)studenti.unimi.it> wrote:

Hello,
As you suggested, I looked deeper into this matter. On the DEA VM I can see the right iptables rules, but I still cannot see the logs in /var/log/messages.
[I'm using bosh-lite, the latest stemcell, CF version 207]
Do you know what I should do to allow this information to be logged?
ref:https://www.pivotaltracker.com/n/projects/966314/stories/90078842
Thank you!
Best regards,
Michael
**************** To allocate the 5x1000 to the Universita' degli Studi di Milano, enter tax code 80012650158 in your tax return.
http://www.unimi.it/13084.htm?utm_source=firmaMail&utm_medium=email&utm_content=linkFirmaEmail&utm_campaign=5xmille
-- Thank you, James Bayer
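A few hedged diagnostics for the question above, assuming shell access to the DEA VM; these are generic netfilter checks rather than a confirmed bosh-lite fix:

    sudo iptables -n -L | grep -i log        # confirm LOG jump targets are present
    sudo dmesg | tail -n 50                  # netfilter LOG output goes to the kernel ring buffer
    grep -i 'IN=.*OUT=' /var/log/messages    # typical netfilter log lines, if syslog captures kern.*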
Log connections from security groups - bosh lite
Michael Grifalconi <michael.grifalconi@...>
Re: Runtime PMC - 2015-06-02 notes
Copying contents of notes here.

----

Runtime PMC Meeting 2015-06-02 <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#agenda>

Agenda
1. Proposed Runtime refactor
2. Current Backlog and Priorities
3. PMC Lifecycle Activities
4. Open Discussion

Attendees <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#attendees>
- Chip Childers, Cloud Foundry Foundation
- Michael Fraenkel, IBM
- Matt Sykes, IBM
- Steve Winkler, GE
- Onsi Fakhouri, Pivotal
- Erik Jasiak, Pivotal
- Sree Tummidi, Pivotal
- Eric Malm, Pivotal
- Marco Nicosia, Pivotal
- James Bayer, Pivotal

Proposed Runtime refactor <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#proposed-runtime-refactor>
[image: runtime-refactor] <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/runtime-refactor.png>

Current Backlogs and Priorities <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#current-backlogs-and-priorities>

LAMB <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#lamb>
- Accelerating the effort to move components away from /varz.
- Plan to work with individual teams to document what the /varz metrics actually mean.
- Backwards-incompatible change in the NOAA library, to be documented and communicated on cf-dev.

Diego <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#diego>
- Nearing completion of the 50-cell experiment; this week working on 100 cells.
- Starting semantic versioning for Diego.
- Some security stories.
- Progressing on the ssh track:
  - reimplementing scp inside of our daemon
  - xtp with the CLI team for the ssh plugin
  - making sure the correct policy is in place for ssh access, configurable at the deployment, space, and app level
  - some discussion around policy for removing instances that have been modified

UAA <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#uaa>
- UAA 2.3.1 released; includes a bug fix that IBM was interested in.
- Revokable token strategy: research and POCs.
- Password policy for multi-tenant zones; adding policy around lockout and expiry.
- Planning an inception next week around token revocation and handling of SAML claims.

Lattice <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#lattice>
- Dropping the 0.2.5 release this week:
  - support for the new version of terraform
  - community-contributed OpenStack support
  - CLI use of Diego's task functionality (the UX is terrible, but the functionality is nice)
  - community-requested monitoring of a URL: health status of a URL
- Massive document scrubbing, using GitHub to update the documents.
- Looking at implementing condenser, creating droplets for Lattice.

Runtime <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#runtime>
- Breaking the GoRouter out into a separate CF Routing team; inception on Route Services on Monday.
- Making good progress on transitioning CI to Concourse; progress is visible here <https://concourse.runtime-ci.cf-app.com/>.
- Making good progress on the Routing API work.

PMC Lifecycle Activities <https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#pmc-lifecycle-activities>
- Proposed moving the work on GoRouter into a separate CF Routing team; no objections raised to creating the new team.
- Proposed Dieu Cao to lead the new CF Routing team; no objections raised to Dieu leading the new team.
- Tracker for CF Routing <https://www.pivotaltracker.com/n/projects/1358110>
- Notes from the Route Services inception <https://docs.google.com/a/pivotal.io/document/d/1XYHuOLISd6zIjTJClJpJYz2m_76RFECYUbyuPk7JoqY/edit?usp=sharing>
Fix confirmed. For the record, it was the issue of running ./update and failing to retrieve the HM9000 git submodules.
--- Armin ranjbar
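For anyone hitting the same compile failure (the full error appears later in this thread), a quick way to check for unfetched submodules before creating a release; ./update is cf-release's own submodule sync script:

    cd cf-release
    ./update                                       # re-syncs all git submodules
    git submodule status --recursive | grep '^-'   # any output means a submodule is still uninitialized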
|
|
Replying to myself: I think something was messed up while updating the git submodules before creating the release. I'm recreating it to make sure it's OK.
--- Armin ranjbar
Hello,
When trying to deploy CF, I get this error during the build process of HM9000. CF release: 210+dev.2, commit hash: c6f46acd, stemcell: bosh-openstack-kvm-ubuntu-trusty-go_agent v2978.
Started compiling packages > hm9000/b27306493cef0f36b94eacb6821ce0e53fc386d6. Failed: Action Failed get_task: Task 57eec963-9620-42a8-43dd-3b193d7b928f result: Compiling package hm9000: Running packaging script: Command exited with 1; Stdout: , Stderr:

++ readlink -nf /var/vcap/packages/golang1.4
+ export GOROOT=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39
+ export PATH=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/bin:/var/vcap/bosh/bin:/usr/local/bin:/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin
+ export GOPATH=/var/vcap/data/compile/hm9000/hm9000
+ go install github.com/cloudfoundry/hm9000

hm9000/src/github.com/cloudfoundry/hm9000/actualstatelistener/actual_state_listener.go:16:2: cannot find package "github.com/cloudfoundry/yagnats" in any of:
        /var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/github.com/cloudfoundry/yagnats (from $GOROOT)
        /var/vcap/data/compile/hm9000/hm9000/src/github.com/cloudfoundry/yagnats (from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm9000.go:11:2: no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/github.com/codegangsta/cli
hm9000/src/github.com/cloudfoundry/storeadapter/etcdstoreadapter/etcd_store_adapter.go:10:2: cannot find package "github.com/coreos/go-etcd/etcd" in any of:
        /var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/github.com/coreos/go-etcd/etcd (from $GOROOT)
        /var/vcap/data/compile/hm9000/hm9000/src/github.com/coreos/go-etcd/etcd (from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/apiserver/handlers/basic_auth_handler.go:6:2: no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/github.com/goji/httpauth
hm9000/src/github.com/cloudfoundry/gunk/diegonats/fake_nats_client.go:8:2: no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/github.com/nu7hatch/gouuid
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:10:2: no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/github.com/onsi/gomega
hm9000/src/github.com/cloudfoundry/gunk/diegonats/nats_client_runner.go:10:2: no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/github.com/pivotal-golang/lager
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:11:2: no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:12:2: cannot find package "github.com/tedsuo/ifrit/ginkgomon" in any of:
        /var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/github.com/tedsuo/ifrit/ginkgomon (from $GOROOT)
        /var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/ginkgomon (from $GOPATH)
hm9000/src/github.com/cloudfoundry-incubator/natbeat/background_heartbeat.go:10:2: cannot find package "github.com/tedsuo/ifrit/grouper" in any of:
        /var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/github.com/tedsuo/ifrit/grouper (from $GOROOT)
        /var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/grouper (from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm/serve_api.go:16:2: cannot find package "github.com/tedsuo/ifrit/http_server" in any of:
        /var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/github.com/tedsuo/ifrit/http_server (from $GOROOT)
        /var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/http_server (from $GOPATH)
hm9000/src/github.com/cloudfoundry-incubator/natbeat/background_heartbeat.go:11:2: cannot find package "github.com/tedsuo/ifrit/restart" in any of:
        /var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/github.com/tedsuo/ifrit/restart (from $GOROOT)
        /var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/restart (from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm/serve_api.go:17:2: cannot find package "github.com/tedsuo/ifrit/sigmon" in any of:
        /var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/github.com/tedsuo/ifrit/sigmon (from $GOROOT)
        /var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/sigmon (from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/apiserver/routes.go:3:8: no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/rata
(00:05:56)

Error 450001: Action Failed get_task: Task 57eec963-9620-42a8-43dd-3b193d7b928f result: Compiling package hm9000: Running packaging script: Command exited with 1
--- Armin ranjbar
Re: Staging error: no available stagers (status code: 400, error code: 170001)
I am sorry that I was of no help to you.
2015-06-05 13:12 GMT+09:00 Guangcai Wang <guangcai.wang(a)gmail.com>:
Finally, it works after I changed the memory and disk configuration for the DEA.
< disk_mb: 2048
---
> disk_mb: 10000
1025c1029
< memory_overcommit_factor: 3
---
> memory_overcommit_factor: 8
1028c1032
< staging_disk_limit_mb: 6144
---
> staging_disk_limit_mb: 4096

[#1] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8192,"available_disk":20000}'
[#2] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8192,"available_disk":20000}'
After I deployed a simple php-demo application, they changed to:
[#8] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8064,"available_disk":18976}'
[#9] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8064,"available_disk":18976}'
However, I still cannot understand why my previous configuration led to "Staging error: no available stagers", since the NATS messages below say it had enough resources. My PHP application only consumes 128M of memory and 1G of disk. Can anyone share some insights?
[#41] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
ubuntu(a)boshclivm:~/apps/cf-php-demo$ cat manifest.yml
---
applications:
- name: cf-php-demo
  memory: 128M
  instances: 1
  host: cf-php-demo
  path: .
ubuntu(a)boshclivm:~/apps/cf-php-demo$ cf apps
name          requested state   instances   memory   disk   urls
cf-php-demo   started           1/1         128M     1G     cf-php-demo.runmyapp.io
On Thu, Jun 4, 2015 at 5:08 PM, Guangcai Wang <guangcai.wang(a)gmail.com> wrote:
I got the NATS message on 'staging.advertise'. It shows enough resources, but that seems not to be correct, and it also cannot explain the error: Server error, status code: 400, error code: 170001, message: Staging error: no available stagers.
[#41] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#42] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#43] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#44] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
+------------------------------------+---------+---------------+---------------+
| Job/index                          | State   | Resource Pool | IPs           |
+------------------------------------+---------+---------------+---------------+
| api_worker_z1/0                    | running | small_z1      | 100.64.0.23   |
| api_z1/0                           | running | medium_z1     | 100.64.0.21   |
| clock_global/0                     | running | medium_z1     | 100.64.0.22   |
| etcd_z1/0                          | running | medium_z1     | 100.64.1.8    |
| ha_proxy_z1/0                      | running | router_z1     | 100.64.1.0    |
|                                    |         |               | 137.172.74.90 |
| hm9000_z1/0                        | running | medium_z1     | 100.64.0.24   |
| loggregator_trafficcontroller_z1/0 | running | small_z1      | 100.64.0.27   |
| loggregator_z1/0                   | running | medium_z1     | 100.64.0.26   |
| login_z1/0                         | running | medium_z1     | 100.64.0.20   |
| nats_z1/0                          | running | medium_z1     | 100.64.1.2    |
| nfs_z1/0                           | running | medium_z1     | 100.64.1.3    |
| postgres_z1/0                      | running | medium_z1     | 100.64.1.4    |
| router_z1/0                        | running | router_z1     | 100.64.1.5    |
| runner_z1/0                        | running | runner_z1     | 100.64.0.25   |
| stats_z1/0                         | running | small_z1      | 100.64.0.18   |
| uaa_z1/0                           | running | medium_z1     | 100.64.0.19   |
+------------------------------------+---------+---------------+---------------+
The runner_z1 VM (100.64.0.25) uses the m1.large flavor: 8GB RAM | 4 VCPU | 20.0GB Disk
92cf66ec-f2e1-4505-bd25-28c02e991535 | m1.large | 8192 | 20 | 20 | | 4 | 1.0 | True
On Thu, Jun 4, 2015 at 11:57 AM, Guangcai Wang <guangcai.wang(a)gmail.com> wrote:
From the source code /var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb:26, it seems there is not enough memory or disk.
def stage(&completion_callback)
  @stager_id = @stager_pool.find_stager(@app.stack.name, staging_task_memory_mb, staging_task_disk_mb)
  raise Errors::ApiError.new_from_details('StagingError', 'no available stagers') unless @stager_id
However, this is my first app, and it should be light. The DEA is using the m1.large flavor (m1.large | 4096 | 20).
Has anyone seen the same error? Any suggestions on the manifest, or debugging tips?
Another question: I want to add more debug information to cloud_controller_ng.log. I tried to add some code in /var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb, but it did not show up in the log. How can I do this?
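A hedged sketch for the logging question, assuming the standard BOSH job layout: code under /var/vcap/packages is only re-read when the job process restarts, and the job writes its log under /var/vcap/sys/log. Process names can differ per deployment, so list them first with monit.

    sudo /var/vcap/bosh/bin/monit summary                     # list job process names on this VM
    sudo /var/vcap/bosh/bin/monit restart cloud_controller_ng
    tail -f /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log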
On Thu, Jun 4, 2015 at 10:14 AM, Guangcai Wang <guangcai.wang(a)gmail.com> wrote:
Attached is the deployment manifest. It was generated by spiff and then modified by me.
On Thu, Jun 4, 2015 at 12:47 AM, Takeshi Morikawa <moog0814(a)gmail.com> wrote:
Please check the 'staging.advertise' NATS messages: https://github.com/cloudfoundry/dea_ng#staging
sample command: bundle exec nats-sub -s nats://[nats.user]:[nats.password]@[nats_ipaddress]:[nats.port] 'staging.advertise'
I have one additional request: can you share your BOSH deployment manifest?
Re: Announcing Experimental support for Asynchronous Service Operations
Duncan Johnston-Watt <duncan.johnstonwatt@...>
+1 -- Duncan Johnston-Watt CEO | Cloudsoft Corporation +44 777 190 2653 | @duncanjw
Sent from my iPhone
On 4 Jun 2015, at 01:05, Onsi Fakhouri <ofakhouri(a)pivotal.io> wrote:
Well done Services API! This is an awesome milestone!
On Wed, Jun 3, 2015 at 5:04 PM, Chip Childers <cchilders(a)cloudfoundry.org> wrote:
Awesome news! Long time coming, and it opens up a whole world of additional capabilities for users.
Nice work everyone!
On Jun 4, 2015, at 9:00 AM, Shannon Coen <scoen(a)pivotal.io> wrote:
On behalf of the Services API team, including Dojo participants from IBM and SAP, I'm pleased to announce experimental availability and published documentation for this much-anticipated feature.
As of cf-release v208 and CLI v6.11.1, Cloud Foundry now supports an enhanced service broker integration in support of long-running provisioning, update, and delete operations. This significantly broadens the supported use cases for Cloud Foundry Marketplace Services, and I can't wait to hear what creative things the ecosystem does with it. Provision VMs, orchestrate clusters, install software, move data... yes, your broker can even open support tickets to have those things done manually!
This feature is currently considered experimental, as we'd like you all to review our docs, try out the feature, and give us feedback. We're very interested to hear about any confusion in the docs or the UX, and any sticky issues you encounter in implementation. Our goal is for our docs to enable a painless, intuitive (can we hope for joyful?) implementation experience.
We have not bumped the broker API yet for this feature. You'll notice that our documentation for the feature is separate from the stable API docs at this point. Once we're confident in the design (we're relying on your feedback!), we'll bump the broker API version, move the docs for asynchronous operations into the stable docs, AND implement support for asynchronous bind/create-key and unbind/delete-key.
Documentation:
- http://docs.cloudfoundry.org/services/asynchronous-operations.html
- http://docs.cloudfoundry.org/services/api.html
Example broker for AWS (contributed by IBM):
- http://docs.cloudfoundry.org/services/examples.html
- https://github.com/cloudfoundry-samples/go_service_broker
Demo of the feature presented at CF Summit 2015:
- https://youtu.be/Ij5KSKrAq9Q
tl;dr
Cloud Foundry expects broker responses within 60 seconds. Now a broker can return an immediate response indicating that a provision, update, or delete operation is in progress. Cloud Foundry then returns a similar response to the client, and begins polling the broker for the status of the operation. Users, via API clients, can discover the status of the operation ("in progress", "succeeded", or "failed"), and brokers can provide user-facing messages in response to each poll which are exposed to users (e.g. "VMs provisioned, installing software, 30% complete").
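To make the flow concrete, a hedged sketch from the API client's side using the cf CLI; the service offering, plan, and instance names are hypothetical, and the states shown are the ones listed above.

    # Kick off a long-running provision; the broker acknowledges within
    # 60 seconds that the operation is in progress.
    cf create-service my-db-service big-cluster mydb
    # Poll the operation's status ("in progress", "succeeded", or "failed"),
    # along with any broker-supplied progress message.
    cf service mydb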
Thank you,
Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.
|
|
Re: What ports will be needed to support hm and loggregator
Lev Berman <lev.berman@...>
We found that loggregator was listening on ports 3456 and 3457 with udp6.
It probably also listens for udp4 connections. Have you tried allowing udp4 traffic to ports 3456-3457 and checking whether loggregator collects the logs after that? On Fri, Jun 5, 2015 at 4:17 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
We found that loggregator was listening on ports 3456 and 3457 with udp6.
udp6 0 0 [::]:3457 [::]:*
But we can’t use ipv6 in our env. So is there any way to force loggregator to use ipv4?
Thanks,
Maggie
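For what it's worth, a udp6 wildcard socket like the one above normally accepts plain IPv4 datagrams too (on Linux, unless net.ipv6.bindv6only=1), so forcing loggregator onto IPv4 may not be necessary once the firewall permits the traffic. A minimal Ruby sketch to convince yourself, run on a spare machine where port 3457 is free:

require 'socket'

# Bind a udp6 wildcard socket like loggregator's, then send to it over IPv4.
server = UDPSocket.new(Socket::AF_INET6)
server.bind('::', 3457)

client = UDPSocket.new(Socket::AF_INET)
client.send('ping', 0, '127.0.0.1', 3457)

payload, sender = server.recvfrom(16)
puts payload          # => "ping" - the udp6 socket received the IPv4 datagram
puts sender.inspect   # source shows as an IPv4-mapped address, e.g. ::ffff:127.0.0.1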
From: cf-dev-bounces(a)lists.cloudfoundry.org [mailto:cf-dev-bounces(a)lists.cloudfoundry.org] On Behalf Of Lev Berman
Sent: June 2, 2015 20:05
To: Discussions about Cloud Foundry projects and the system overall.
Subject: Re: [cf-dev] What ports will be needed to support hm and loggregator
Sorry, I had missed your notes about the firewalls you configure for each CF machine - these firewalls are what need to be configured to accept UDP traffic on ports 3456 and 3457 from any host. vSphere itself will probably allow this traffic without any additional configuration.
On Tue, Jun 2, 2015 at 1:51 PM, Berman Lev <lev.berman(a)altoros.com> wrote:
I have never worked with vSphere, unfortunately. I've googled a bit and found this table, which shows which TCP and UDP ports are open by default on vSphere VMs - https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.security.doc/GUID-ECEA77F5-D38E-4339-9B06-FF9B78E94B68.html. Consult the vSphere documentation to find out how to add UDP ports 3456 and 3457 to this list.
On Tue, Jun 2, 2015 at 1:32 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
I deployed my CF on a vSphere server.
From: cf-dev-bounces(a)lists.cloudfoundry.org [mailto:cf-dev-bounces(a)lists.cloudfoundry.org] On Behalf Of Lev Berman
Sent: June 2, 2015 18:30
To: Discussions about Cloud Foundry projects and the system overall.
Subject: Re: [cf-dev] What ports will be needed to support hm and loggregator
You have posted your Application Security Groups - http://docs.pivotal.io/pivotalcf/adminguide/app-sec-groups.html. These groups are created and managed by Cloud Foundry.
But the issue here is with security groups configured in your infrastructure - AWS, OpenStack, etc. Which one is your CF deployed on?
On Tue, Jun 2, 2015 at 1:23 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
Hi, Lev
Would you please let me know what exactly I should add to my security group? Following are the current configuration.
- name: public_networks
rules:
- protocol: all
destination: 0.0.0.0-9.255.255.255
- protocol: all
destination: 11.0.0.0-169.253.255.255
- protocol: all
destination: 169.255.0.0-172.15.255.255
- protocol: all
destination: 172.32.0.0-192.167.255.255
- protocol: all
destination: 192.169.0.0-255.255.255.255
- name: dns
rules:
- protocol: tcp
destination: 0.0.0.0/0
ports: '53'
- protocol: udp
destination: 0.0.0.0/0
ports: '53'
default_running_security_groups:
- public_networks
- dns
default_staging_security_groups:
- public_networks
- dns
Thanks,
Maggie
From: cf-dev-bounces(a)lists.cloudfoundry.org [mailto:cf-dev-bounces(a)lists.cloudfoundry.org] On Behalf Of Lev Berman
Sent: June 2, 2015 18:16
To: Discussions about Cloud Foundry projects and the system overall.
Subject: Re: [cf-dev] What ports will be needed to support hm and loggregator
Hi,
At least for loggregator to successfully talk to the metron agents, you need to add a rule to the security group for your private subnet allowing ingress UDP traffic on ports 3456 and 3457 from all hosts (0.0.0.0/0). See more about the security group rules needed for CF here - http://docs.cloudfoundry.org/deploying/common/security_groups.html.
On Tue, Jun 2, 2015 at 1:04 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
Hi,
I am upgrading my cf env from 172 to 197, but I found some issues after the upgrade was done. I couldn't get the correct running application instance number:
CF_TRACE=true cf apps
…
"running_instances": -1,
…
application started ?/3
Another issue is I can’t get log information from loggregator. “cf logs” showed nothing after I restarted my application.
I think this may be related to our firewall configuration, because in another environment where no firewall is configured, hm and loggregator work perfectly well. We have separate firewalls for the deas, the routers, and all other components (three firewalls). So would anyone please tell me which ports we should open for the deas, routers, and other components?
Thanks,
Maggie
--
Lev Berman
Altoros - Cloud Foundry deployment, training and integration
Github: https://github.com/ldmberman
|
|
Runtime PMC - 2015-06-02 notes
|
|
Re: Announcing Experimental support for Asynchronous Service Operations
I'm very happy to see this work delivered, as the 60-second limit has come up so often as a pain point in the past. Great job to all who contributed!
On Wed, Jun 3, 2015 at 5:05 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io> wrote: Well done Services API! This is an awesome milestone!
On Wed, Jun 3, 2015 at 5:04 PM, Chip Childers <cchilders(a)cloudfoundry.org> wrote:
Awesome news! Long time coming, and it opens up a whole world of additional capabilities for users.
Nice work everyone!
On Jun 4, 2015, at 9:00 AM, Shannon Coen <scoen(a)pivotal.io> wrote:
On behalf of the Services API team, including Dojo participants from IBM and SAP, I'm pleased to announce experimental availability and published documentation for this much-anticipated feature.
As of cf-release v208 and CLI v6.11.1, Cloud Foundry now supports an enhanced service broker integration in support of long-running provisioning, update, and delete operations. This significantly broadens the supported use cases for Cloud Foundry Marketplace Services, and I can't wait to hear what creative things the ecosystem does with it. Provision VMs, orchestrate clusters, install software, move data... yes, your broker can even open support tickets to have those things done manually!
This feature is currently considered experimental, as we'd like you all to review our docs, try out the feature, and give us feedback. We're very interested to hear about any confusion in the docs or the UX, and about any sticky issues you encounter in implementation. Our goal is for our docs to enable a painless, intuitive (can we hope for joyful?) implementation experience.
We have not bumped the broker API yet for this feature. You'll notice that our documentation for the feature is separate from the stable API docs at this point. Once we're confident in the design (we're relying on your feedback!), we'll bump the broker API version, move the docs for asynchronous operations into the stable docs, AND implement support for asynchronous bind/create-key and unbind/delete-key.
Documentation:
- http://docs.cloudfoundry.org/services/asynchronous-operations.html
- http://docs.cloudfoundry.org/services/api.html
Example broker for AWS (contributed by IBM):
- http://docs.cloudfoundry.org/services/examples.html
- https://github.com/cloudfoundry-samples/go_service_broker
Demo of the feature presented at CF Summit 2015:
- https://youtu.be/Ij5KSKrAq9Q
tl;dr
Cloud Foundry expects broker responses within 60 seconds. Now a broker can return an immediate response indicating that a provision, update, or delete operation is in progress. Cloud Foundry then returns a similar response to the client, and begins polling the broker for the status of the operation. Users, via API clients, can discover the status of the operation ("in progress", "succeeded", or "failed"), and brokers can provide user-facing messages in response to each poll which are exposed to users (e.g. "VMs provisioned, installing software, 30% complete").
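To make the tl;dr concrete, below is a minimal sketch of a broker answering asynchronously - Sinatra-style Ruby with a hypothetical enqueue_provision helper. The endpoint shapes here follow the broker API as it later stabilized, so treat the experimental docs linked above as the authority for this release:

require 'sinatra'
require 'json'

# Hypothetical stand-in for kicking off the real long-running work.
def enqueue_provision(instance_id, request_params)
  Thread.new { sleep 300 } # e.g. orchestrate VMs, install software...
end

put '/v2/service_instances/:instance_id' do
  enqueue_provision(params[:instance_id], JSON.parse(request.body.read))
  status 202          # tell Cloud Foundry the operation is still in progress
  content_type :json
  {}.to_json
end

# Cloud Controller polls this endpoint until the operation completes.
get '/v2/service_instances/:instance_id/last_operation' do
  content_type :json
  { state: 'in progress',   # or 'succeeded' / 'failed'
    description: 'VMs provisioned, installing software, 30% complete' }.to_json
end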
Thank you,
Shannon Coen Product Manager, Cloud Foundry Pivotal, Inc.
-- Thank you,
James Bayer
|
|
Re: Staging error: no available stagers (status code: 400, error code: 170001)
Finally, it works after I changed the configuration related to memory and disk for the dea:

< disk_mb: 2048
---
> disk_mb: 10000
1025c1029
< memory_overcommit_factor: 3
---
> memory_overcommit_factor: 8
1028c1032
< staging_disk_limit_mb: 6144
---
> staging_disk_limit_mb: 4096

[#1] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8192,"available_disk":20000}'
[#2] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8192,"available_disk":20000}'

After I deployed a simple php-demo application, they changed to:

[#8] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8064,"available_disk":18976}'
[#9] Received on [staging.advertise] : '{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8064,"available_disk":18976}'

However, I still cannot understand why my previous configuration led to "Staging error: no available stagers", since the nats messages below say it had enough resources. My php application consumes only 128M of memory and 1G of disk. Who can share some insights?

[#41] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'

ubuntu(a)boshclivm:~/apps/cf-php-demo$ cat manifest.yml
---
applications:
- name: cf-php-demo
  memory: 128M
  instances: 1
  host: cf-php-demo
  path: .

ubuntu(a)boshclivm:~/apps/cf-php-demo$ cf apps
name          requested state   instances   memory   disk   urls
cf-php-demo   started           1/1         128M     1G     cf-php-demo.runmyapp.io

On Thu, Jun 4, 2015 at 5:08 PM, Guangcai Wang <guangcai.wang(a)gmail.com> wrote: I got the nats messages on 'staging.advertise'. They show enough resources, but something seems wrong, and they cannot explain the error - Server error, status code: 400, error code: 170001, message: Staging error: no available stagers.
[#41] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#42] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#43] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#44] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
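One way to read those advertisements against the find_stager check quoted below: staging has its own memory and disk limits, and with the original staging_disk_limit_mb of 6144 no DEA advertising available_disk of 4096 could qualify, however light the app itself is. A simplified, hypothetical sketch of the comparison (the staging memory default is an assumption):

# Values taken from the staging.advertise messages above.
advert = {
  'stacks'           => ['lucid64', 'cflinuxfs2'],
  'available_memory' => 3072,  # MB
  'available_disk'   => 4096,  # MB
}

staging_task_memory_mb = 1024  # assumed default
staging_task_disk_mb   = 6144  # the original staging_disk_limit_mb

eligible = advert['stacks'].include?('cflinuxfs2') &&
           advert['available_memory'] >= staging_task_memory_mb &&
           advert['available_disk']   >= staging_task_disk_mb

puts eligible  # => false: 4096 MB of advertised disk < 6144 MB staging disk limit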
+------------------------------------+---------+---------------+---------------+
| Job/index                          | State   | Resource Pool | IPs           |
+------------------------------------+---------+---------------+---------------+
| api_worker_z1/0                    | running | small_z1      | 100.64.0.23   |
| api_z1/0                           | running | medium_z1     | 100.64.0.21   |
| clock_global/0                     | running | medium_z1     | 100.64.0.22   |
| etcd_z1/0                          | running | medium_z1     | 100.64.1.8    |
| ha_proxy_z1/0                      | running | router_z1     | 100.64.1.0    |
|                                    |         |               | 137.172.74.90 |
| hm9000_z1/0                        | running | medium_z1     | 100.64.0.24   |
| loggregator_trafficcontroller_z1/0 | running | small_z1      | 100.64.0.27   |
| loggregator_z1/0                   | running | medium_z1     | 100.64.0.26   |
| login_z1/0                         | running | medium_z1     | 100.64.0.20   |
| nats_z1/0                          | running | medium_z1     | 100.64.1.2    |
| nfs_z1/0                           | running | medium_z1     | 100.64.1.3    |
| postgres_z1/0                      | running | medium_z1     | 100.64.1.4    |
| router_z1/0                        | running | router_z1     | 100.64.1.5    |
| runner_z1/0                        | running | runner_z1     | 100.64.0.25   |
| stats_z1/0                         | running | small_z1      | 100.64.0.18   |
| uaa_z1/0                           | running | medium_z1     | 100.64.0.19   |
+------------------------------------+---------+---------------+---------------+
The runner VM (100.64.0.25) uses the m1.large flavor:
m1.large | 8GB RAM | 4 VCPU | 20.0GB Disk
92cf66ec-f2e1-4505-bd25-28c02e991535 | m1.large | 8192 | 20 | 20 | | 4 | 1.0 | True
On Thu, Jun 4, 2015 at 11:57 AM, Guangcai Wang <guangcai.wang(a)gmail.com> wrote:
From the source code /var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb:26, it seems there is not enough memory or disk.
def stage(&completion_callback)
  @stager_id = @stager_pool.find_stager(@app.stack.name, staging_task_memory_mb, staging_task_disk_mb)
  raise Errors::ApiError.new_from_details('StagingError', 'no available stagers') unless @stager_id
However, this is my first app; it should be light. The DEA is using m1.large, which is: m1.large | 4096 | 20
Has anyone hit the same error? Any suggestions on the manifest or debugging tips?
Another question: I want to add more debug information to cloud_controller_ng.log. I tried to add some code in /var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb, but it did not show up in the log. How can I do this?
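A likely reason added lines never show up: the Ruby files under /var/vcap/packages are loaded once at process start, so edits only take effect after restarting the job (monit restart cloud_controller_ng). cloud_controller_ng logs through the steno gem; a minimal, hypothetical sketch of emitting a line the way it does (the wiring shown here is illustrative, not the job's real config):

require 'steno'

# Stand-in config; the real job writes to cloud_controller_ng.log via its own sinks.
Steno.init(Steno::Config.new(sinks: [Steno::Sink::IO.new($stdout)]))
logger = Steno.logger('cc.debug-example')
logger.info('custom-debug: entering find_stager')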
On Thu, Jun 4, 2015 at 10:14 AM, Guangcai Wang <guangcai.wang(a)gmail.com> wrote:
Attached is the deployment manifest. It was generated by spiff and then I modified it.
On Thu, Jun 4, 2015 at 12:47 AM, Takeshi Morikawa <moog0814(a)gmail.com> wrote:
Please check the 'staging.advertise' nats messages: https://github.com/cloudfoundry/dea_ng#staging
sample command: bundle exec nats-sub -s nats://[nats.user]:[nats.password]@[nats_ipaddress]:[nats.port] 'staging.advertise'
I have one additional request: can you share your bosh deployment manifest?
|
|
Re: UAA : Is anyone utilizing the Password Score Feature
On the Password Score feature, I haven't yet received any updates on whether it's being used at all - please let us know if anyone is using it. Thank you Nick/Steve/Josh for the feedback! I agree with the approach of having a minimum number of special chars and specifying the allowed special chars. We are following the OWASP model; the list of allowed characters is here <https://www.owasp.org/index.php/Password_special_characters>. I will update the policy requirements on my side. -Sree
On Wed, Jun 3, 2015 at 12:39 PM, Winkler, Steve (GE Global Research) <steve.winkler(a)ge.com> wrote: +1
From: Nicholas Calugar <ncalugar(a)pivotal.io>
Reply-To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Date: Wednesday, June 3, 2015 at 12:20 PM
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Cc: CF Developers Mailing List <cf-dev(a)lists.cloudfoundry.org>
Subject: Re: [cf-dev] UAA : Is anyone utilizing the Password Score Feature
Hi Sree,
Not sure if this is possible, but maybe instead of requireAtLeastOneSpecialCharacter boolean, you could do minSpecialCharacters int (0-n)? This would allow more rigorous password policies.
Nick
— Nicholas Calugar
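A hypothetical sketch of that suggestion, using the OWASP special-character set Sree mentions above (names here are invented for illustration, not UAA code):

# Space is part of the OWASP password special characters list.
OWASP_SPECIALS = ' !"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'

def meets_special_char_policy?(password, min_special_characters)
  password.chars.count { |c| OWASP_SPECIALS.include?(c) } >= min_special_characters
end

puts meets_special_char_policy?('Passw0rd', 1)   # => false
puts meets_special_char_policy?('Pa$$w0rd!', 2)  # => true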
On Wed, Jun 3, 2015 at 12:00 PM, Sree Tummidi <stummidi(a)pivotal.io<mailto: stummidi(a)pivotal.io>> wrote:
Hi All,
The UAA team is in the process of implementing the Password Policy feature <https://www.pivotaltracker.com/story/show/82182984> for users stored in UAA. The following properties around password strength will be exposed in the YML configuration.
#passwordPolicy:
#  minLength: 8
#  requireAtLeastOneSpecialCharacter: true
#  requireAtLeastOneUppercaseCharacter: true
#  requireAtLeastOneLowercaseCharacter: true
#  requireAtLeastOneDigit: true
The Password Policy feature is being implemented to support multi-tenant UAA. Each Tenant/Identity Zone will get its own password policy. The password policy for the default zone will be configurable via YML.
UAA currently supports the zxcvbn <https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/> style password score. This is currently exposed via the following properties in the YML configuration file. There is an endpoint <https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-APIs.rst#query-the-strength-of-a-password-post-password-score> for querying the status of the same.
password-policy:
required-score: <int>
We would like to understand whether this password score feature is being utilized at all. We don't plan on making it multi-tenant and would like to drop it in favor of the new approach, which is much more granular and supports multi-tenancy.
Thanks, Sree Tummidi Sr. Product Manager Identity - Pivotal Cloud Foundry
|
|
Re: Release Notes for v210

Guillaume Berche
Joseph,
Double-checking after a good night of sleep, my last check was wrong (reviewing the diff, I had checked for the presence of job properties, and missed, at the bottom of the file, the presence of ``metron_agent.deployment`` within the top-level ``properties``).
So the root cause of my issue was indeed a lack of "git submodule update", which had left cf-release/templates/cf-lamb.yml outdated.
Sorry for the noise and the extra work involved in reviewing this. Thanks again for your help and your prompt merge of the nfs template issue.
Guillaume.
On Thu, Jun 4, 2015 at 1:42 AM, CF Runtime <cfruntime(a)gmail.com> wrote: Guillaume,
We run the pipelines using the Docker image built from cf-release/pipeline-image/Dockerfile, which checks out the spiff repo and builds it, so it should be 1.0.6 since that seems to be where master is currently.
Which SHA do you have checked out for cf-release/src/loggregator?
Do you see:
metron_agent:
  deployment: (( meta.environment ))
at the bottom of cf-release/templates/cf-lamb.yml?
Joseph Palermo CF Runtime Team
On Wed, Jun 3, 2015 at 1:17 PM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Joseph,
I just checked, and I indeed still reproduce the issue against the cf-release v210 branch with the submodule properly updated (including loggregator).
What other info could be useful to diagnose the root cause and the environment difference with the cf runtime pipeline? Are the pipelines indeed using the latest released spiff version (1.0.6 [8])?
Guillaume.
[8] https://github.com/cloudfoundry-incubator/spiff/releases/tag/v1.0.6
On Wed, Jun 3, 2015 at 9:46 PM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Hi Joseph,
Thanks for your prompt response and the details over the current infrastructures covered by runtime pipelines. Great to hear the nfs template will be merged soon, thanks!
I'm indeed using the generate_deployment_manifest from cf-release, and was still experiencing the issue described in [5] until I patched both cf-release/templates/cf-lamb.yml (which happens to belong to the loggregator repo) and cf-jobs.yml as in [2].
I'll double-check tomorrow whether I could have been caught by a transient lack of "git submodule update", which would explain the problem on my side. If that is the case, then I'm sorry for the noise and the extra associated work.
Regards,
Guillaume.
[2] https://github.com/cloudfoundry/cf-release/pull/696 [5] https://github.com/cloudfoundry/cf-release/issues/690 [7] https://github.com/cloudfoundry/bosh-lite/issues/265
On Wed, Jun 3, 2015 at 7:50 PM, CF Runtime <cfruntime(a)gmail.com> wrote:
Hi Guillaume,
The metron_agent.deployment default can be found in cf-release/templates/cf-lamb.yml which should get merged automatically if using the generate_deployment_manifest script in cf-release.
We do currently have pipelines for all supported environments (AWS, vSphere, OpenStack, and BoshLite)
Spiff templates are still the recommended way of deploying cf-release, and I would expect the nfs template change to be merged today as it is near the top of our backlog.
Joseph Palermo CF Runtime Team
On Wed, Jun 3, 2015 at 7:32 AM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Hi,
Thanks for the v210 announcement and the associated release notes. It seems that the v209-announced introduction of a new mandatory metron_agent.deployment property did not make it into the default spiff templates [5]. Note that I tried updating the v209 release note formatting to make this more explicit [6].
I'm wondering whether the Pivotal runtime/release team has a cf-release pipeline for the vSphere infrastructure (I suspect the AWS-based pipelines were fine)? Is such a pipeline using the spiff templates in cf-release/templates [4], or has it moved to something else such as cf-boshworkspace [3]?
If the spiff templates in cf-release/templates are still the recommended way of deploying CF, is there a way to prioritize the merge of PRs for known issues in v211 such as [1] and [2], so as to avoid the need for the cf community to maintain its own fork of cf-release/templates?
Thanks in advance,
Guillaume.
[1] https://github.com/cloudfoundry/cf-release/pull/689 [2] https://github.com/cloudfoundry/cf-release/pull/696 [3] https://github.com/cloudfoundry-community/cf-boshworkspace [4] https://github.com/cloudfoundry/cf-release/tree/master/templates [5] https://github.com/cloudfoundry/cf-release/issues/690 [6] https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v209/fdae17795c61691f96f90cc9fd7be90945252937
On Wed, May 27, 2015 at 7:59 AM, Dieu Cao <dcao(a)pivotal.io> wrote:
The cf-release v210 was released on May 23rd, 2015 Runtime
- Addressed USN-2617-1 <http://www.ubuntu.com/usn/usn-2617-1/> CVE-2015-3202 <http://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-3202.html> FUSE vulnerabilities
- Removed fuse binaries from lucid64 rootfs. Apps running on the lucid64 stack requiring fuse should switch to cflinuxfs2 details <https://www.pivotaltracker.com/story/show/95186578>
- fuse binaries updated on cflinuxfs2 rootfs. details <https://www.pivotaltracker.com/story/show/95177810>
- [Experimental] Work continues on support for Asynchronous Service Instance Operations details <https://www.pivotaltracker.com/epic/show/1561148>
  - Support for configurable max polling duration
- [Experimental] Work continues on /v3 and Application Process Types details <https://www.pivotaltracker.com/epic/show/1334418>
- [Experimental] Work continues on Route API details <https://www.pivotaltracker.com/epic/show/1590160>
- [Experimental] Work continues on Context Path Routes details <https://www.pivotaltracker.com/epic/show/1808212>
- Work continues on support for Service Keys details <https://www.pivotaltracker.com/epic/show/1743366>
- Upgrade etcd server to 2.0.1 details <https://www.pivotaltracker.com/story/show/91070214>
  - Should be run as 1 node (for small deployments) or 3 nodes spread across zones (for HA)
  - Also upgrades hm9k dependencies. LAMB client to be upgraded in a subsequent release. Older client is compatible.
- cloudfoundry/cf-release #670 <https://github.com/cloudfoundry/cf-release/pull/670>: Be able to specify timeouts for acceptance tests without defaults in the spec. details <https://www.pivotaltracker.com/story/show/93914198>
- Fix bug where ssl enabled routers were not draining properly details <https://www.pivotaltracker.com/story/show/94718480>
- cloudfoundry/cloud_controller_ng #378 <https://github.com/cloudfoundry/cf-release/pull/378>: current usage against the org quota details <https://www.pivotaltracker.com/story/show/94171010>
UAA
- Bumped to UAA 2.3.0 details <https://github.com/cloudfoundry/uaa/releases/tag/2.3.0>
Used Configuration
- BOSH Version: 152
- Stemcell Version: 2889
- CC Api Version: 2.27.0
Commit summary <http://htmlpreview.github.io/?https://github.com/cloudfoundry-community/cf-docs-contrib/blob/master/release_notes/cf-210-whats-in-the-deploy.html> Compatible Diego Version
- final release 0.1247.0 commit <https://github.com/cloudfoundry-incubator/diego-release/commit/a122a78eeb344bbfc90b7bcd0fa987d08ef1a5d1>
Manifest and Job Spec Changes
- properties.acceptance_tests.skip_regex added
- properties.app_ssh.host_key_fingerprint added
- properties.app_ssh.port defaults to 2222
- properties.uaa.newrelic added
- properties.login.logout.redirect.parameter.whitelist
On Sat, May 23, 2015 at 9:50 PM, James Bayer <jbayer(a)pivotal.io> wrote:
CVE-2015-3202 details: http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000194.html
CVE-2015-1834 details: http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000195.html
On Sat, May 23, 2015 at 9:41 PM, James Bayer <jbayer(a)pivotal.io> wrote:
please note that this release addresses CVE-2015-3202 and CVE-2015-1834 and we strongly recommend upgrading to this release. more details will be forthcoming after the long united states holiday weekend.
https://github.com/cloudfoundry/cf-release/releases/tag/v210
https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210
-- Thank you,
James Bayer
-- Thank you,
James Bayer
|
|
Re: getting the cf version from the api
There's currently no way to determine this exactly. As Takeshi suggests, the API version is the closest thing, but it does not get bumped with every cf-release.
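For reference, that API version is visible without authentication via the /v2/info endpoint - a minimal Ruby sketch (api.example.com is a placeholder for your Cloud Controller endpoint):

require 'net/http'
require 'json'
require 'uri'

# /v2/info is unauthenticated; api_version is the closest thing to a
# platform version the API reports.
info = JSON.parse(Net::HTTP.get(URI('https://api.example.com/v2/info')))
puts info['api_version']  # e.g. "2.28.0"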
-Dieu CF Runtime PM
|
|