Announcing cf-mysql-release v20
Marco Nicosia
Hi There!
On behalf of the CF Core Services team I am pleased to announce v20 of
cf-mysql-release <https://github.com/cloudfoundry/cf-mysql-release>.
--
cf-mysql-release <https://github.com/cloudfoundry/cf-mysql-release> is a
BOSH release that delivers a database-as-a-service for Cloud Foundry users.
Through Cloud Foundry, users can provision databases and deliver unique
credentials to bound applications.
v20 expands on the AWS support introduced in v18. (v19 was a small update
release that hardly merited an e-mail to this list.) The major change in
v20 is that we've restructured the templates used to generate BOSH
manifests for Amazon Web Services. These templates now split the three
major functions of our software (proxy, mysql, and the service broker)
across separate AWS Availability Zones. The mysql job has been split to run
across three AZs, one instance each. Proxy and Service Broker are deployed
in an HA configuration across two Availability Zones.
If you've already deployed a version of cf-mysql-release on AWS, you'll
need to follow the upgrade instructions very carefully to re-deploy across
zones while retaining your data. Please see the release notes for v20
<https://github.com/cloudfoundry/cf-mysql-release/releases/tag/v20>.
A neat side-effect of making this change is that we've given each job its
own BOSH resource pool
<https://bosh.io/docs/deployment-manifest.html#resource-pools> per Availability
Zone! With the new templates, the Operator can specify different instance
types per job. We've changed the defaults to save AWS users a little cash:
MySQL instances remain m3.larges while Brokers and Proxies drop down to
m3.mediums. You can find documentation on how to modify your instance sizes
on the deployment resources
<https://github.com/cloudfoundry/cf-mysql-release/blob/v20/docs/deployment-resources.md>
page.
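For operators curious what this looks like in practice, here is a minimal sketch of inspecting and overriding an instance type in a generated manifest. The file name cf-mysql-aws.yml and the pool name proxy_z1 are hypothetical; the authoritative property paths are on the deployment resources page linked above.
# See which AWS instance types the generated manifest ended up with.
grep -n 'instance_type' cf-mysql-aws.yml
# To change one pool, edit its resource_pools entry, for example:
#   resource_pools:
#   - name: proxy_z1              # hypothetical pool name
#     cloud_properties:
#       instance_type: m3.medium
# Then point BOSH at the manifest and redeploy.
bosh deployment cf-mysql-aws.yml
bosh deploy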
Amidst other upgrades and improvements (we re-wrote the Quota Enforcer),
we've also upgraded MariaDB to version 10.0.17, Galera to 25.3.9, and
Xtrabackup to 2.2.10. Note that in combination with BOSH, MariaDB Galera
cluster upgrades are performed without downtime
<http://galeracluster.com/documentation-webpages/upgrading.html#id1>!
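If you'd like to watch a rolling upgrade stay healthy, one simple check is Galera's own cluster-size counter while BOSH updates the mysql nodes one at a time. A minimal sketch, assuming the admin MySQL credentials from your deployment manifest:
# Upload v20 and redeploy; BOSH walks the mysql instances one by one.
bosh upload release "https://bosh.io/d/github.com/cloudfoundry/cf-mysql-release?v=20"
bosh deploy
# Meanwhile, on any mysql node, the cluster size should never drop below 2 of 3.
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"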
If you'd like to know more, detailed release notes can be found on GitHub
<https://github.com/cloudfoundry/cf-mysql-release/releases/tag/v20> and
bosh.io
<http://bosh.io/releases/github.com/cloudfoundry/cf-mysql-release?version=20>.
If you'd like to know even more than that, each line of the release notes
contains a link to the original story under which that feature was commissioned.
As always, we'd love to hear if you're having any problems with this
version of the software. Please open a GitHub issue
<https://github.com/cloudfoundry/cf-mysql-release/issues>. If you're
willing, we're always very happy to receive a Pull Request
<https://github.com/cloudfoundry/cf-mysql-release/pulls>.
--
Marco Nicosia
Product Manager
Pivotal Software, Inc.
mnicosia(a)pivotal.io
c: 650-796-2948
(No subject)
Daniel Mikusa
On Mon, Jun 8, 2015 at 12:03 PM, Arbi Akhina <arbi.akhina(a)gmail.com> wrote:
I'm trying to push an executable JAR to a bosh-lite instance; in the logs
I see CF trying many times to restart the app and eventually fail.
I can't find out why the app crashes, as there are no app logs returned by
CF. I tried to remotely debug the app (as described in [1]) but nothing
happens in Eclipse. Any hint to solve this issue is appreciated.
[1] http://docs.cloudfoundry.org/buildpacks/java/java-tips.html#debugging
There's a known issue that occurs when an app starts and fails in rapid
succession and results in the log entries being missed. If you add a
couple-second pause into the app, it will give the system enough time to
attach the logger and you'll see the output generated by the crashing app.
You can do this with a custom start command `cf push -c 'sleep 2 &&
<normal-cmd>'` or with a `.profile.d` script that sleeps for a couple of
seconds.
Dan
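For anyone trying the second option Dan mentions, here is a minimal sketch of such a script; the file name is arbitrary and the .profile.d directory sits at the root of the pushed artifact:
# .profile.d/000_sleep.sh -- sourced before the start command runs.
# Give the platform a moment to attach the logger before the JVM starts
# (and possibly crashes).
sleep 2
The `cf push -c 'sleep 2 && <normal-cmd>'` form does the same thing without changing the pushed artifact.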
(No subject)
Arbi Akhina
I'm trying to push an executable JAR to a bosh-lite instance; in the logs I
see CF trying many times to restart the app and eventually fail.
I can't find out why the app crashes, as there are no app logs returned by
CF. I tried to remotely debug the app (as described in [1]) but nothing
happens in Eclipse. Any hint to solve this issue is appreciated.
1. I'm launching the app with:
*cf push -t 180*
2. Here is the manifest.yml content:
---
applications:
- name: modules
  memory: 4G
  instances: 1
  host: modules-${random-word}
  path: modules.jar
  buildpack: https://github.com/cloudfoundry/java-buildpack
  env:
    JAVA_OPTS: "$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=192.168.2.8:8000"
3. Here is the log:
Connected, dumping recent logs for app modules in org heavenize / space dev
as admin...
2015-06-07T12:05:15.69+0200 [API] OUT Created app with guid
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:05:16.10+0200 [API] OUT Updated app with guid
1fbf3378-6512-46de-bae4-02ee30275464
({"route"=>"68e27d8d-4ff6-443b-a3e0-416c40d325d3"})
2015-06-07T12:12:21.55+0200 [DEA] OUT Got staging request for app with
id 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:12:25.40+0200 [API] OUT Updated app with guid
1fbf3378-6512-46de-bae4-02ee30275464 ({"state"=>"STARTED"})
2015-06-07T12:12:38.08+0200 [STG] OUT -----> Downloaded app package
(163M)
2015-06-07T12:13:40.93+0200 [STG] ERR Cloning into
'/tmp/buildpacks/java-buildpack'...
2015-06-07T12:14:01.67+0200 [STG] OUT -----> Java Buildpack
Version: c862ac8 | https://github.com/cloudfoundry/java-buildpack#c862ac8
2015-06-07T12:14:13.89+0200 [STG] OUT -----> Downloading Open Jdk JRE
1.8.0_45 from
https://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.8.0_45.tar.gz
(11.6s)
2015-06-07T12:14:15.34+0200 [STG] OUT Expanding Open Jdk JRE to
.java-buildpack/open_jdk_jre (1.4s)
2015-06-07T12:14:15.89+0200 [STG] OUT -----> Downloading Open JDK Like
Memory Calculator 1.1.1_RELEASE from
https://download.run.pivotal.io/memory-calculator/lucid/x86_64/memory-calculator-1.1.1_RELEASE
(0.5s)
2015-06-07T12:14:15.90+0200 [STG] OUT Memory Settings:
-XX:MaxMetaspaceSize=419430K -XX:MetaspaceSize=419430K -Xss1M -Xmx3G -Xms3G
2015-06-07T12:15:43.44+0200 [STG] OUT -----> Uploading droplet (151M)
2015-06-07T12:16:07.51+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:16:29.66+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"9d55c5f791324d358bffb4c961a4c7ee", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672189}
2015-06-07T12:17:14.18+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:17:31.10+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"2ae0c26f33864f40989ee870a1b9e3db", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672251}
2015-06-07T12:17:38.48+0200 [DEA] OUT Removing crash for app with id
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:17:38.48+0200 [DEA] OUT Stopping app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:17:38.48+0200 [DEA] OUT Stopped app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:17:55.31+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:18:11.82+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"dc872d38f3324af481c82ba67f0e216c", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672291}
2015-06-07T12:18:18.69+0200 [DEA] OUT Removing crash for app with id
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:18:18.69+0200 [DEA] OUT Stopping app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:18:18.69+0200 [DEA] OUT Stopped app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:18:34.06+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:18:50.98+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"623c3af7e3e84801b6fd44eeee9c0a12", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672330}
2015-06-07T12:18:58.80+0200 [DEA] OUT Removing crash for app with id
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:18:58.80+0200 [DEA] OUT Stopping app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:18:58.80+0200 [DEA] OUT Stopped app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:19:34.08+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:19:50.36+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"40727eea293146948af197e13443843c", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672390}
2015-06-07T12:19:59.01+0200 [DEA] OUT Removing crash for app with id
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:19:59.01+0200 [DEA] OUT Stopping app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:19:59.01+0200 [DEA] OUT Stopped app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:21:04.12+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:21:20.61+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"f7ffff55692a418c847f4f37be574ddf", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672480}
2015-06-07T12:21:29.43+0200 [DEA] OUT Removing crash for app with id
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:21:29.43+0200 [DEA] OUT Stopping app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:21:29.47+0200 [DEA] OUT Stopped app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:23:34.16+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:23:49.97+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"4581e97c6b0f4504b8d64a5c69d6787b", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672629}
2015-06-07T12:23:50.29+0200 [DEA] OUT Removing crash for app with id
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:23:50.29+0200 [DEA] OUT Stopping app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:23:50.29+0200 [DEA] OUT Stopped app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:28:14.24+0200 [DEA] OUT Starting app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:28:29.82+0200 [API] OUT App instance exited with guid
1fbf3378-6512-46de-bae4-02ee30275464 payload: {"cc_partition"=>"default",
"droplet"=>"1fbf3378-6512-46de-bae4-02ee30275464",
"version"=>"703f793c-e388-41ac-b1f1-c564b301ca70",
"instance"=>"f98749490a6743598f57d3848eb06177", "index"=>0,
"reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to
start", "crash_timestamp"=>1433672909}
2015-06-07T12:28:31.73+0200 [DEA] OUT Removing crash for app with id
1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:28:31.73+0200 [DEA] OUT Stopping app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
2015-06-07T12:28:31.73+0200 [DEA] OUT Stopped app instance (index 0)
with guid 1fbf3378-6512-46de-bae4-02ee30275464
[1] http://docs.cloudfoundry.org/buildpacks/java/java-tips.html#debugging
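A related trick, complementary to the sleep workaround Dan suggests earlier in this digest: stream the logs in a second terminal while the app restarts, so nothing is lost between crash loops. A minimal sketch using the app name from the manifest above:
# Terminal 1: keep a log stream attached the whole time.
cf logs modules
# Terminal 2: trigger a fresh start and watch terminal 1 for [App/0] output.
cf restart modules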
Re: What ports will be needed to support hm and loggregator
Erik Jasiak <ejasiak@...>
Loggregator listens for the "old" [1] loggregator message format on 3456,
and for the newer dropsonde messages on 3457. We're actively working to mark
the old loggregatorlib as deprecated.
"Do you know why 3457 is used when the incoming listening port for
loggregator is specified as 3456?" - I found in our Bosh Manifest readme
example that this was incorrect, and someone is fixing that now. Did you
read it anywhere else we need to update?
Thanks,
Erik
[1] https://github.com/cloudfoundry/loggregatorlib/tree/master/logmessage
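A quick way to confirm which of those two UDP ports a given VM is actually bound to (the same kind of check that produced the udp6 output quoted further down):
# List UDP listeners and filter for the loggregator/dropsonde ports.
netstat -anu | grep -E ':(3456|3457)'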
On Sat, Jun 6, 2015 at 8:29 AM, Jason Huang <jasonxs.huang(a)gmail.com> wrote:
Yes he also listens to udp4. By allowing udp 3456 and 3457, it worked.
Do you know why 3457 is used when the incoming listening port for
loggregator is specified as 3456?
Sent from my iPhone
On Jun 5, 2015, at 1:58 AM, Lev Berman <lev.berman(a)altoros.com> wrote:
We found the loggregator was listening on port 3456 and 3457 with udp6.
He also probably listens for udp4 connections. Have you tried to allow
udp4 traffic to ports 3456-3457 and check if loggregator collects the logs
after it?
On Fri, Jun 5, 2015 at 4:17 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:
We found the loggregator was listening on port 3456 and 3457 with udp6.
udp6 0 0 [::]:3457 [::]:*
But we can’t use ipv6 in our env. So is there any way to force
loggregator to use ipv4?
Thanks,
Maggie
*From:* cf-dev-bounces(a)lists.cloudfoundry.org [mailto:
cf-dev-bounces(a)lists.cloudfoundry.org] *On Behalf Of *Lev Berman
*Sent:* June 2, 2015 20:05
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* Re: [cf-dev] What ports will be needed to support hm and
loggregator
Sorry, I've missed your notes about the firewalls you configure for each
CF machine - these firewalls are what need to be configured to accept UDP
traffic to ports 3456 and 3457 from any host. vSphere itself will probably
allow this traffic without any additional configuration.
On Tue, Jun 2, 2015 at 1:51 PM, Berman Lev <lev.berman(a)altoros.com>
wrote:
I have never worked with vSphere, unfortunately. I've googled a bit and
found this table which shows which TCP and UDP ports are open by default on
vSphere VMs -
https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.security.doc/GUID-ECEA77F5-D38E-4339-9B06-FF9B78E94B68.html.
Consult the vSphere documentation to find out how to add UDP 3456 and 3457
ports to this list.
On Tue, Jun 2, 2015 at 1:32 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:
I deployed my CF on a vSphere server.
*From:* cf-dev-bounces(a)lists.cloudfoundry.org [mailto:
cf-dev-bounces(a)lists.cloudfoundry.org] *On Behalf Of *Lev Berman
*Sent:* June 2, 2015 18:30
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* Re: [cf-dev] What ports will be needed to support hm and
loggregator
You have posted your Application Security Groups -
http://docs.pivotal.io/pivotalcf/adminguide/app-sec-groups.html. These
groups are created and managed by Cloud Foundry.
But the issue here is with security groups configured in your
infrastructure - AWS, OpenStack, etc. Which one is your CF deployed on?
On Tue, Jun 2, 2015 at 1:23 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:
Hi, Lev
Would you please let me know what exactly I should add to my security
group? Following is the current configuration.
- name: public_networks
rules:
- protocol: all
destination: 0.0.0.0-9.255.255.255
- protocol: all
destination: 11.0.0.0-169.253.255.255
- protocol: all
destination: 169.255.0.0-172.15.255.255
- protocol: all
destination: 172.32.0.0-192.167.255.255
- protocol: all
destination: 192.169.0.0-255.255.255.255
- name: dns
rules:
- protocol: tcp
destination: 0.0.0.0/0
ports: '53'
- protocol: udp
destination: 0.0.0.0/0
ports: '53'
default_running_security_groups:
- public_networks
- dns
default_staging_security_groups:
- public_networks
- dns
Thanks,
Maggie
*From:* cf-dev-bounces(a)lists.cloudfoundry.org [mailto:
cf-dev-bounces(a)lists.cloudfoundry.org] *On Behalf Of *Lev Berman
*Sent:* June 2, 2015 18:16
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* Re: [cf-dev] What ports will be needed to support hm and
loggregator
Hi,
At least for loggregator to successfully talk to metron agents, you need
to add a rule to a security group for your private subnet allowing the
ingress UDP traffic through ports 3456 and 3457 from all hosts (0.0.0.0/0).
See more about security group rules needed for CF here -
http://docs.cloudfoundry.org/deploying/common/security_groups.html.
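Where the firewall in front of the DEAs and loggregator VMs is host-based (Maggie mentions separate firewalls rather than cloud security groups further down), the equivalent rule might look like the sketch below; the chain and any interface or source restrictions depend on your setup:
# Allow ingress UDP to the metron/loggregator ports (3456 legacy, 3457 dropsonde).
iptables -A INPUT -p udp -m multiport --dports 3456,3457 -j ACCEPT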
On Tue, Jun 2, 2015 at 1:04 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:
Hi,
I am updating my cf env from 172 to 197. But I found some issues after
upgrade is done. I couldn’t get the correct running application instance
number:
CF_TRACE=true cf apps
…
"running_instances": -1,
…
application started ?/3
Another issue is I can’t get log information from loggregator. “cf logs”
showed nothing after I restarted my application.
I think this may be related to our firewall configuration. Because in
another environment where no firewall is configured, hm and loggregator
work perfectly well. We have firewalls for deas, routers and all other
components separately (three firewalls). So would anyone please tell me what
ports we should open for deas, routers or other components?
Thanks,
Maggie
--
Lev Berman
Altoros - Cloud Foundry deployment, training and integration
Github: https://github.com/ldmberman
Re: Log connections from security groups - bosh lite
Michael Grifalconi <michael.grifalconi@...>
Hello, I'm posting some more info:
Kernel logging is enabled; inside the DEA I can see:
cat /etc/rsyslog.conf
[...]
$IncludeConfig /etc/rsyslog.d/*.conf
cat /etc/rsyslog.d/enable-kernel-logging.conf
$ModLoad imklog
after pushing an app, I see on the DEA the correct rules:
-A warden-i-18nvgifiemi -p tcp -m tcp --dport 80 -g warden-i-18nvgifiemi-log
-A warden-i-18nvgifiemi-log -p tcp -m conntrack --ctstate INVALID,NEW,UNTRACKED -j LOG --log-prefix "warden-i-18nvgifiemi "
but on /var/log/messages I only get:
Jun 8 07:03:26 localhost kernel: [ 3256.433021] IPv6: ADDRCONF(NETDEV_CHANGE): w-18nvgifiemg-0: link becomes ready
the php application pushed:
xx(a)boshClient:~/myPhpApp$ cat index.php
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<?php
echo '<p>Hello PHP from the server at:</p>';
echo $_SERVER['SERVER_ADDR'];
echo '<p>hi from hostname:</p>';
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'http://xxxxxxx');
$result = curl_exec($curl);
echo gethostname();
?>
</body>
</html>
When I browse this application's page, I see the page from the webserver on xxxx fetched via curl, but I don't get any log.
bosh stemcells
+---------------------------------------------+---------+--------------------------------------+
| Name | Version | CID |
+---------------------------------------------+---------+--------------------------------------+
| bosh-warden-boshlite-ubuntu-trusty-go_agent | 2776* | c5ac6590-13ec-4ba2-6fa9-e78cf553c4e6 |
+---------------------------------------------+---------+--------------------------------------+
--------------------------------------------------------------------
xx(a)boshClient:~$ cf security-groups
Getting security groups as admin
OK
Name Organization Space
#0 public_networks
#1 dns
#2 logging myOrg myDevSpace
xx(a)boshClient:~$ cf security-group logging
Getting info for security group logging as admin
OK
Name logging
Rules
[
{
"destination": "0.0.0.0/0",
"log": true,
"ports": "80",
"protocol": "tcp"
}
]
Organization Space
#0 myOrg myDevSpace
I tried with protocol 'all' and 'tcp', and with the port where my local Apache server on the LAN is listening.
Any suggestion is appreciated!
Regards,
Michael
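One more thing worth checking on the DEA, since the iptables LOG rules are clearly in place: whether the kernel is emitting the entries at all and where rsyslog is routing them. A small sketch:
# LOG-target hits land in the kernel ring buffer first.
dmesg | grep warden-i-
# Then see whether rsyslog wrote them anywhere under /var/log.
grep -R 'warden-i-' /var/log/ 2>/dev/null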
On 06/06/15 09:25, Dieu Cao <dcao(a)pivotal.io> wrote:
Yes, I do recall that the feature did not work on bosh-lite but that was when kernel logging was disabled on the trusty stemcell.
Michael, could you send the json for the application security group you've applied to the space you're looking at?
-Dieu
CF Runtime PM
On Fri, Jun 5, 2015 at 5:48 PM, James Bayer <jbayer(a)pivotal.io> wrote:
I seem to remember something about app security group logging having an issue with bosh-lite that isn't present when you have a DEA in a VM. I'll see if Dieu remembers.
On Fri, Jun 5, 2015 at 1:06 PM, Michael <michael.grifalconi(a)studenti.unimi.it> wrote:
Hello,
as you suggested, I looked deeper into this matter, and I can see that on the DEA VM I get the right iptables rules, but I still cannot see the logs in /var/log/messages.
[I'm using bosh-lite, latest stemcell, CF version 207]
Do you know what I should do to allow this information to be logged?
ref: https://www.pivotaltracker.com/n/projects/966314/stories/90078842
Thank you!
Best regards,
Michael
--
Thank you,
James Bayer
Are there any official Chinese docs?
tan_bw@...
I can't open the Chinese docs website http://cndocs.cloudfoundry.com. Is this website closed? Is there any other substitute?
Re: Adding second domain to API endpoint
James Bayer
It appears as though the cloud controller only supports advertising a
single domain:
https://github.com/cloudfoundry/cf-release/blob/master/jobs/cloud_controller_ng/spec#L62-L63
I wonder if a CNAME would work for this as a work-around?
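If anyone wants to experiment with that, a sketch of the test (the domain names are placeholders); whether the router and the advertised endpoint behave sensibly for the aliased name is exactly the open question:
# DNS: create api.alt.example.com as a CNAME for api.system.example.com, then:
dig +short api.alt.example.com
cf api https://api.alt.example.com --skip-ssl-validation
cf login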
On Sat, Jun 6, 2015 at 9:19 AM, Vinicius Carvalho <
viniciusccarvalho(a)gmail.com> wrote:
Hi there, how can I add a second domain to my API endpoint? I know how to
add a shared domain for apps. But looking at the router /routes, the
api.<domain> maps directly to the cloudcontroller. How can I add a second
route using a different domain already mapped on my DNS?
Regards
--
The intuitive mind is a sacred gift and the
rational mind is a faithful servant. We have
created a society that honors the servant and
has forgotten the gift.
--
Thank you,
James Bayer
Adding second domain to API endpoint
Vinicius Carvalho <viniciusccarvalho@...>
Hi there, how can I add a second domain to my API endpoint? I know how to
add a shared domain for apps. But looking at the router /routes, the
api.<domain> maps directly to the cloudcontroller. How can I add a second
route using a different domain already mapped on my DNS?
Regards
--
The intuitive mind is a sacred gift and the
rational mind is a faithful servant. We have
created a society that honors the servant and
has forgotten the gift.
Re: What ports will be needed to support hm and loggregator
Jason Huang
Yes he also listens to udp4. By allowing udp 3456 and 3457, it worked.
Do you know why 3457 is used when the incoming listening port for loggregator is specified as 3456?
Sent from my iPhone
On Jun 5, 2015, at 1:58 AM, Lev Berman <lev.berman(a)altoros.com> wrote:
We found the loggregator was listening on port 3456 and 3457 with udp6.
He also probably listens for udp4 connections. Have you tried to allow udp4
traffic to ports 3456-3457 and check if loggregator collects the logs after
it?
On Fri, Jun 5, 2015 at 4:17 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
We found the loggregator was listening on port 3456 and 3457 with udp6.
udp6 0 0 [::]:3457 [::]:*
But we can’t use ipv6 in our env. So is there any way to force loggregator to use ipv4?
Thanks,
Maggie
From: cf-dev-bounces(a)lists.cloudfoundry.org [mailto:cf-dev-bounces(a)lists.cloudfoundry.org] On Behalf Of Lev Berman
Sent: June 2, 2015 20:05
To: Discussions about Cloud Foundry projects and the system overall.
Subject: Re: [cf-dev] What ports will be needed to support hm and loggregator
Sorry, I've missed your notes about the firewalls you configure for each CF machine - these firewalls are what need to be configured to accept UDP traffic to ports 3456 and 3457 from any host. vSphere itself will probably allow this traffic without any additional configuration.
On Tue, Jun 2, 2015 at 1:51 PM, Berman Lev <lev.berman(a)altoros.com> wrote:
I have never worked with vSphere, unfortunately. I've googled a bit and found this table which shows which TCP and UDP ports are open by default on vSphere VMs - https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.security.doc/GUID-ECEA77F5-D38E-4339-9B06-FF9B78E94B68.html. Consult the vSphere documentation to find out how to add UDP 3456 and 3457 ports to this list.
On Tue, Jun 2, 2015 at 1:32 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
I deployed my CF on a vSphere server.
From: cf-dev-bounces(a)lists.cloudfoundry.org [mailto:cf-dev-bounces(a)lists.cloudfoundry.org] On Behalf Of Lev Berman
Sent: June 2, 2015 18:30
To: Discussions about Cloud Foundry projects and the system overall.
Subject: Re: [cf-dev] What ports will be needed to support hm and loggregator
You have posted your Application Security Groups - http://docs.pivotal.io/pivotalcf/adminguide/app-sec-groups.html. These groups are created and managed by Cloud Foundry.
But the issue here is with security groups configured in your infrastructure - AWS, OpenStack, etc. Which one is your CF deployed on?
On Tue, Jun 2, 2015 at 1:23 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
Hi, Lev
Would you please let me know what exactly I should add to my security group? Following is the current configuration.
- name: public_networks
rules:
- protocol: all
destination: 0.0.0.0-9.255.255.255
- protocol: all
destination: 11.0.0.0-169.253.255.255
- protocol: all
destination: 169.255.0.0-172.15.255.255
- protocol: all
destination: 172.32.0.0-192.167.255.255
- protocol: all
destination: 192.169.0.0-255.255.255.255
- name: dns
rules:
- protocol: tcp
destination: 0.0.0.0/0
ports: '53'
- protocol: udp
destination: 0.0.0.0/0
ports: '53'
default_running_security_groups:
- public_networks
- dns
default_staging_security_groups:
- public_networks
- dns
Thanks,
Maggie
From: cf-dev-bounces(a)lists.cloudfoundry.org [mailto:cf-dev-bounces(a)lists.cloudfoundry.org] On Behalf Of Lev Berman
Sent: June 2, 2015 18:16
To: Discussions about Cloud Foundry projects and the system overall.
Subject: Re: [cf-dev] What ports will be needed to support hm and loggregator
Hi,
At least for loggregator to successfully talk to metron agents, you need to add a rule to a security group for your private subnet allowing the ingress UDP traffic through ports 3456 and 3457 from all hosts (0.0.0.0/0). See more about security group rules needed for CF here - http://docs.cloudfoundry.org/deploying/common/security_groups.html.
On Tue, Jun 2, 2015 at 1:04 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
Hi,
I am updating my cf env from 172 to 197. But I found some issues after upgrade is done. I couldn’t get the correct running application instance number:
CF_TRACE=true cf apps
…
"running_instances": -1,
…
application started ?/3
Another issue is I can’t get log information from loggregator. “cf logs” showed nothing after I restarted my application.
I think this may be related to our firewall configuration. Because in another environment where no firewall is configured, hm and loggregator work perfectly well. We have firewalls for deas, routers and all other components separately (three firewalls). So would anyone please tell me what ports we should open for deas, routers or other components?
Thanks,
Maggie
--
Lev Berman
Altoros - Cloud Foundry deployment, training and integration
Github: https://github.com/ldmberman
Re: Log connections from security groups - bosh lite
Dieu Cao <dcao@...>
Yes, I do recall that the feature did not work on bosh-lite but that was
when kernel logging was disabled on the trusty stemcell.
Michael, could you send the json for the application security group you've
applied to the space you're looking at?
-Dieu
CF Runtime PM
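For anyone following along, the raw JSON Dieu is asking for can also be pulled straight from the Cloud Controller API; a sketch using the group name from Michael's post:
# Dump the security group definition, including its rules array, as JSON.
cf curl "/v2/security_groups?q=name:logging"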
On Fri, Jun 5, 2015 at 5:48 PM, James Bayer <jbayer(a)pivotal.io> wrote:
I seem to remember something about app security group logging having an
issue with bosh-lite that isn't present when you have a DEA in a VM. I'll
see if Dieu remembers.
On Fri, Jun 5, 2015 at 1:06 PM, Michael <michael.grifalconi(a)studenti.unimi.it> wrote:
Hello,
as you suggested, I looked deeper into this matter, and I can see that on
the DEA VM I get the right iptables rules, but I still cannot see the logs
in /var/log/messages.
[I'm using bosh-lite, latest stemcell, CF version 207]
Do you know what I should do to allow this information to be logged?
ref: https://www.pivotaltracker.com/n/projects/966314/stories/90078842
Thank you!
Best regards,
Michael
--
Thank you,
James Bayer
cf-release v211 is available
Dieu Cao <dcao@...>
cf-release v211 was released on June 4th, 2015.
- IMPORTANT: This release removes lucid64 stack, please ensure apps are
migrated prior to upgrade
- IMPORTANT: If using the postgres included within cf-release, please
carefully read the note below about the upgrade to postgres 9.4.2
Runtime
- Remove lucid64 stack completely from cf-release details
<https://www.pivotaltracker.com/story/show/95483678>
- Please ensure all your applications have migrated to the cflinuxfs2
stack prior to upgrading to this release
- Once all apps have been migrated to the new stack, operators will
need to manually delete the lucid64 stack via the CC API as an admin
user (see the sketch after this list).
-
http://apidocs.cloudfoundry.org/211/stacks/delete_a_particular_stack.html
- Upgraded postgres included in cf-release to postgres 9.4.2 details
<https://www.pivotaltracker.com/story/show/77680398>
- See note below about the postgres job upgrade
- [Experimental] Work continues on /v3 and Application Process Types
details <https://www.pivotaltracker.com/epic/show/1334418>
- [Experimental] Work continues on Route API details
<https://www.pivotaltracker.com/epic/show/1590160>
- [Experimental] Work continues on Context Path Routes details
<https://www.pivotaltracker.com/epic/show/1808212>
- Work in progress for support of user-provided tags on service
instances details <https://www.pivotaltracker.com/epic/show/1879702>
- cloudfoundry/cf-release #689
<https://github.com/cloudfoundry/cf-release/pull/689>: Fixing failed
cc_ng and cc_ng_worker with NFS details
<https://www.pivotaltracker.com/story/show/95602450>
- Remove default support address for CC details
<https://www.pivotaltracker.com/story/show/92724640>
- increased cloud_controller_ng start timeout to be able to run long
ccdb migrations details
<https://github.com/cloudfoundry/cf-release/commit/ff57572d67c9c0e7e38d9b2298762faba8547727>
- cloudfoundry/cf-release #680
<https://github.com/cloudfoundry/cf-release/pull/680>: staticfile to be
tested before nodejs/ruby buildpacks details
<https://www.pivotaltracker.com/story/show/94594952>
- cloudfoundry/stacks #16
<https://github.com/cloudfoundry/cf-release/pull/16>: Add cmake to
rootfses details <https://www.pivotaltracker.com/story/show/94022672>
- cloudfoundry/stacks #17
<https://github.com/cloudfoundry/cf-release/pull/17>: Add autoconf to
rootfs details <https://www.pivotaltracker.com/story/show/94411176>
- cloudfoundry/cf-release #682
<https://github.com/cloudfoundry/cf-release/pull/682>: Upgrading ruby
buildpack to v1.4.2 details
<https://www.pivotaltracker.com/story/show/95087396>
- cloudfoundry/cf-release #683
<https://github.com/cloudfoundry/cf-release/pull/683>: Upgrading python
buildpack to v1.3.2 details
<https://www.pivotaltracker.com/story/show/95163484>
- Make 'dea_next.stacks' overridable in the manifest. details
<https://www.pivotaltracker.com/story/show/92393276>
- cloudfoundry/cf-release #681
<https://github.com/cloudfoundry/cf-release/pull/681>: Add security
group for cf-mysql subnets on bosh-lite details
<https://www.pivotaltracker.com/story/show/95024316>
- cloudfoundry/dea_ng #164
<https://github.com/cloudfoundry/cf-release/pull/164>: Add warden_handle
method to staging task details
<https://www.pivotaltracker.com/story/show/95427526>
- Use MASQUERADE instead of SNAT for container NAT details
<https://github.com/cloudfoundry/warden/commit/4f1e5c049a12199fdd1f29cde15c9a786bd5fac8>
- Throw better errors for apps stats endpoint details
<https://www.pivotaltracker.com/story/show/93268820>
- Fix buildpack_cache deletion issue details
<https://www.pivotaltracker.com/story/show/95474242>
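As referenced in the lucid64 item above, the stack has to be deleted through the Cloud Controller API as an admin. A minimal sketch using `cf curl`; the GUID lookup and the grep-based parsing are illustrative, not an official procedure:
# Delete the lucid64 stack via the CC v2 API -- run as an admin user.
# Double-check the GUID before deleting anything.
STACK_GUID=$(cf curl "/v2/stacks?q=name:lucid64" \
  | grep -o '"guid": "[^"]*"' | head -1 | cut -d'"' -f4)
echo "Deleting stack lucid64 with GUID $STACK_GUID"
cf curl -X DELETE "/v2/stacks/$STACK_GUID"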
Loggregator
-
If no Dopplers are available in an AZ, Metron will now fail over across AZs.
details <https://www.pivotaltracker.com/story/show/86649938>
-
StatsD support broken out of Metron and into a separate process. A new
class of components for adding data into Metron/Loggregator is now known
as "injectors." Further info to follow on cf-dev.
- details <https://www.pivotaltracker.com/story/show/95065248>.
- repo <https://github.com/cloudfoundry/statsd-injector>.
-
All loggregator metrics now go through a Metron /varz shim instead of
writing to a local /varz.
- Most loggregator metrics will have a different prefix as a result.
- All former and new metrics are documented in the wiki
<https://github.com/cloudfoundry/loggregator/wiki/Loggregator--varz-metrics-page>
(scroll right) and in a public Google doc
<https://docs.google.com/spreadsheets/d/176yIaJChXEmvm-CjybmwopdRGQfDGrSzo3J_Mx8mMnk/edit?usp=sharing>.
- Story details <https://www.pivotaltracker.com/story/show/95539818>.
- Other CF Components to follow; docs to be formalized with
documentation team.
-
NOAA client library fixed Close() issue, independent of CF release.
Change is backward-incompatible.
- details <https://www.pivotaltracker.com/story/show/94103174> | cf-dev
announcement
<http://lists.cloudfoundry.org/pipermail/cf-dev/2015-June/000316.html>
| github diff
<https://github.com/cloudfoundry/noaa/commit/0de0770ca632948b6ae49ab28c1c04e260d31bbb>
-
Removed Dropsonde protocol dependence on gogoproto for non-go builds.
details <https://www.pivotaltracker.com/story/show/94688854>
-
Increase doppler marshal/unmarshal efficiency to compensate for message
size changes. details <https://www.pivotaltracker.com/story/show/93439456>
-
[Bug Fix] Syslog drain binder is no longer leaking connections to
cloud_controller. details
<https://www.pivotaltracker.com/story/show/93932106>
-
[Bug Fix] LoggregatorClientPool no longer leaking clients to
non-existent dopplers. details
<https://www.pivotaltracker.com/story/show/95008094>
Used Configuration
- BOSH Version: 152
- Stemcell Version: 2969
- CC Api Version: 2.28.0
Commit summary
<http://htmlpreview.github.io/?https://github.com/cloudfoundry-community/cf-docs-contrib/blob/master/release_notes/cf-211-whats-in-the-deploy.html>
Compatible Diego Version
- final release 0.1281.0 commit
<https://github.com/cloudfoundry-incubator/diego-release/commit/fc114972868c3adc544f22860ef77593cb624e64>
Postgres Job Upgrade
The Postgres Job will upgrade the postgres database to version 9.4.2.
Postgres will be unavailable during this upgrade.
A copy of the database is made for the upgrade, so you may need to adjust
the persistent disk capacity of the postgres job.
If the upgrade fails:
- The old database is still available at /var/vcap/store/postgres
- The new database is at /var/vcap/store/postgres-9.4.2
- A marker file is kept at /var/vcap/store/FLAG_POSTGRES_UPGRADE to
prevent the upgrade from happening again.
- pg_upgrade logs that may have details of why the migration failed can
be found in /home/vcap/
To attempt the upgrade again, you should remove
/var/vcap/store/postgres-9.4.2 and /var/vcap/store/FLAG_POSTGRES_UPGRADE.
To roll back to a previous release, you should remove
/var/vcap/store/postgres-9.4.2 and /var/vcap/store/FLAG_POSTGRES_UPGRADE.
The previous release has no knowledge of these files, but they will
conflict if you later try the upgrade again.
Post upgrade, both old and new databases are kept. The old database is moved
to /var/vcap/store/postgres-previous. The postgres-previous directory will
be kept until the next postgres upgrade is performed. You are
free to remove this if you have verified the new database works and you
want to reclaim the space.
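A minimal sketch of the retry/rollback steps described above, run on the postgres job VM; the paths come from the notes, and the redeploy step is whatever `bosh deploy` invocation you normally use:
# On the postgres job VM, after a failed 9.4.2 upgrade:
# inspect the pg_upgrade logs first, then clear the partially upgraded copy
# so the upgrade can be retried (or the previous release redeployed).
ls -l /home/vcap/                          # pg_upgrade logs are written here
sudo rm -rf /var/vcap/store/postgres-9.4.2
sudo rm -f  /var/vcap/store/FLAG_POSTGRES_UPGRADE
# The original data is still at /var/vcap/store/postgres; re-run your usual
# `bosh deploy` to retry, or deploy the previous release to roll back.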
Manifest and Job Spec Changes
- properties.cc.stacks.default lucid64 stack has been removed
- properties.dea_next.stacks.default lucid64 stack has been removed
https://github.com/cloudfoundry/cf-release/releases/tag/v211
Re: cf-dev Digest, Vol 3, Issue 18
Shannon Coen
Hello Supraja,
Did you delete the service broker with `cf delete-service-broker`?
Then you registered the broker again with `cf create-service-broker`?
Then you tried to make the service plans public with `cf
enable-service-access testService`?
When you ran this last command, are you sure you were an admin? You will
receive the error "Service offering testService not found" if you are not
authenticated as an admin user.
Best,
Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.
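For completeness, the sequence Shannon describes looks roughly like the following; the broker name, credentials, and URL are placeholders, and the last command simply verifies plan visibility:
# Re-register a broker and make its plans public -- run as an admin user.
# Broker name, credentials, and URL below are placeholders.
cf delete-service-broker my-broker -f
cf create-service-broker my-broker broker-user broker-pass https://my-broker.example.com
cf enable-service-access testService
cf service-access -b my-broker     # confirm the plans now show access "all"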
Date: Wed, 3 Jun 2015 17:18:43 -0700
From: Supraja Yasoda <ykmsupraja(a)gmail.com>
To: cf-dev(a)lists.cloudfoundry.org
Subject: [cf-dev] Service offering testService not found
Hi,
I have deleted the service broker after removing its service instances. I
created it again, but now I am unable to run enable-service-access for the
service; I get the error "Service offering testService not found".
When I GET the catalog I can see the service ID and name under the service
definition. Could someone advise on this?
--
Regards,
Re: Log connections from security groups - bosh lite
James Bayer
I seem to remember something about app security group logging having an
issue with bosh-lite that isn't present when you have a DEA in a VM. I'll
see if Dieu remembers.
On Fri, Jun 5, 2015 at 1:06 PM, Michael <michael.grifalconi(a)studenti.unimi.it> wrote:
Hello,
As you suggested, I looked deeper into this matter, and I can see that on
the DEA VM I get the right iptables rules, but I still cannot see the logs
in /var/log/messages.
[I'm using bosh-lite, the latest stemcell, and CF version 207.]
Do you know what I should do to allow this information to be logged?
ref:https://www.pivotaltracker.com/n/projects/966314/stories/90078842
Thank you!
Best regards,
Michael
****************
To allocate your 5x1000 tax contribution to the University of Milan, indicate tax code 80012650158 in your income tax return.
http://www.unimi.it/13084.htm?utm_source=firmaMail&utm_medium=email&utm_content=linkFirmaEmail&utm_campaign=5xmille
--
Thank you,
James Bayer
Log connections from security groups - bosh lite
Michael Grifalconi <michael.grifalconi@...>
Hello,
As you suggested, I looked deeper into this matter, and I can see that on the DEA VM I get the right iptables rules, but I still cannot see the logs in /var/log/messages.
[I'm using bosh-lite, the latest stemcell, and CF version 207.]
Do you know what I should do to allow this information to be logged?
ref:https://www.pivotaltracker.com/n/projects/966314/stories/90078842
Thank you!
Best regards,
Michael
****************
To allocate your 5x1000 tax contribution to the University of Milan, indicate tax code 80012650158 in your income tax return.
http://www.unimi.it/13084.htm?utm_source=firmaMail&utm_medium=email&utm_content=linkFirmaEmail&utm_campaign=5xmille
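Two generic checks that can help narrow this down on the DEA VM (a sketch only; it does not address the bosh-lite kernel-logging caveat discussed elsewhere in this thread):
# On the DEA VM: confirm the security-group LOG rules are actually installed...
sudo iptables -L -n -v | grep -i log
# ...and look for the kernel log lines wherever they may be routed.
sudo dmesg | tail -n 20
sudo grep -i -E 'IN=.*OUT=' /var/log/messages /var/log/syslog 2>/dev/null | tail -n 20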
Re: Runtime PMC - 2015-06-02 notes
Dieu Cao <dcao@...>
Copying contents of notes here.
----
Runtime PMC Meeting 2015-06-02
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#agenda>
Agenda
1. Proposed Runtime refactor
2. Current Backlog and Priorities
3. PMC Lifecycle Activities
4. Open Discussion
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#attendees>
Attendees
- Chip Childers, Cloud Foundry Foundation
- Michael Fraenkel, IBM
- Matt Sykes, IBM
- Steve Winkler, GE
- Onsi Fakhouri, Pivotal
- Erik Jasiak, Pivotal
- Sree Tummidi, Pivotal
- Eric Malm, Pivotal
- Marco Nicosia, Pivotal
- James Bayer, Pivotal
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#proposed-runtime-refactor>
Proposed Runtime refactor
[image: runtime-refactor]
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/runtime-refactor.png>
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#current-backlogs-and-priorities>
Current Backlogs and Priorities
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#lamb>
LAMB
- Accelerating effort to move components away from varz.
- Plan to work with individual teams to document what the varz metrics
actually mean
- backwards incompatible change in NOAA library - to be documented and
communicated to cf-dev
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#diego>
Diego
- Nearing completion of 50 cell experiment
- This week working on 100 cells
- starting semantic versioning for Diego
- Some security stories
- progressing on ssh track
- reimplementing scp inside of our daemon
- xtp with cli for ssh plugin
- making sure correct policy is in place for ssh access, configurable at
deployment, space, and app level
- some discussion around policy for removing instances that have been modified
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#uaa>
UAA
- UAA 2.3.1 released - includes a bug fix that IBM was interested in
- Revokable token strategy - research and POCs
- Password policy for multi-tenant zones
- Add policy around lockout and expiry
- Planning inception next week around token revocation and handling of
saml claims
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#lattice>
Lattice
- Dropping the 0.2.5 release this week
- Support for the new version of Terraform
- Community-contributed OpenStack support
- CLI to use Diego's task functionality; the UX is terrible, but the
functionality is nice.
- Community-requested: enable monitoring of a URL / health status of a URL
- Massive documentation scrubbing, using GitHub to update the documents
- Looking at implementing condenser, creating droplets for Lattice
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#runtime>
Runtime
- Breaking out the GoRouter into a separate CF Routing team, inception
on Route Services on Monday.
- Making good progress on transitioning CI to Concourse. You can see our
progress here <https://concourse.runtime-ci.cf-app.com/>
- Making good progress on Routing API work.
<https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md#pmc-lifecycle-activities>
PMC Lifecycle Activities
- Proposed moving the work on GoRouter into a separate CF Routing team.
No objections raised to creating the new team
- Proposed Dieu Cao to lead the new CF Routing team. No objections
raised to Dieu leading the new team
- Tracker for CF Routing
<https://www.pivotaltracker.com/n/projects/1358110>
- Notes from Route Services inception
<https://docs.google.com/a/pivotal.io/document/d/1XYHuOLISd6zIjTJClJpJYz2m_76RFECYUbyuPk7JoqY/edit?usp=sharing>
On Fri, Jun 5, 2015 at 12:55 AM, Dieu Cao <dcao(a)pivotal.io> wrote:
Hi,
We had a meeting for the Runtime PMC earlier this week.
Notes available here:
https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-06-02-runtime.md
-Dieu
Re: Hm9000 Build error
Armin Ranjbar <zoup@...>
Fix confirmed. For the record, the issue was caused by running ./update and
failing to retrieve the HM9000 git submodules.
---
Armin ranjbar
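For anyone hitting the same compile failure, a minimal sketch of re-syncing the submodules before cutting the dev release again; `./update` is the script mentioned above, and the explicit git commands are an equivalent fallback:
# From the cf-release checkout: make sure all submodules are fully fetched
# before creating the dev release again.
./update                                  # the script mentioned above, or:
git submodule sync --recursive
git submodule update --init --recursive
bosh create release --force               # then upload and deploy as usual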
On Fri, Jun 5, 2015 at 6:51 PM, Armin Ranjbar <zoup(a)zoup.org> wrote:
Replying to myself: I think it was due to the fact that something was messed
up while updating the git submodules before creating the release; I'm
recreating it to make sure it's OK.
---
Armin ranjbar
Re: Hm9000 Build error
Armin Ranjbar <zoup@...>
Replying to myself: I think it was due to the fact that something was messed
up while updating the git submodules before creating the release; I'm
recreating it to make sure it's OK.
---
Armin ranjbar
Hm9000 Build error
Armin Ranjbar <zoup@...>
Hello,
When trying to deploy CF, I get this error during the build process of HM9000.
CF release: 210+dev.2, commit hash: c6f46acd
bosh-openstack-kvm-ubuntu-trusty-go_agent v2978
Started compiling packages >
hm9000/b27306493cef0f36b94eacb6821ce0e53fc386d6. Failed: Action Failed
get_task: Task 57eec963-9620-42a8-43dd-3b193d7b928f result: Compiling
package hm9000: Running packaging script: Command exited with 1; Stdout: ,
Stderr: ++ readlink -nf /var/vcap/packages/golang1.4
+ export
GOROOT=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39
+
GOROOT=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39
+ export
PATH=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/bin:/var/vcap/bosh/bin:/usr/local/bin:/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin
+
PATH=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/bin:/var/vcap/bosh/bin:/usr/local/bin:/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin
+ export GOPATH=/var/vcap/data/compile/hm9000/hm9000
+ GOPATH=/var/vcap/data/compile/hm9000/hm9000
+ go install github.com/cloudfoundry/hm9000
hm9000/src/
github.com/cloudfoundry/hm9000/actualstatelistener/actual_state_listener.go:16:2:
cannot find package "github.com/cloudfoundry/yagnats" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/cloudfoundry/yagnats (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/cloudfoundry/yagnats
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm9000.go:11:2: no buildable Go
source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/codegangsta/cli
hm9000/src/
github.com/cloudfoundry/storeadapter/etcdstoreadapter/etcd_store_adapter.go:10:2:
cannot find package "github.com/coreos/go-etcd/etcd" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/coreos/go-etcd/etcd (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/coreos/go-etcd/etcd
(from $GOPATH)
hm9000/src/
github.com/cloudfoundry/hm9000/apiserver/handlers/basic_auth_handler.go:6:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/goji/httpauth
hm9000/src/github.com/cloudfoundry/gunk/diegonats/fake_nats_client.go:8:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/nu7hatch/gouuid
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:10:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/onsi/gomega
hm9000/src/github.com/cloudfoundry/gunk/diegonats/nats_client_runner.go:10:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/pivotal-golang/lager
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:11:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/tedsuo/ifrit
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:12:2:
cannot find package "github.com/tedsuo/ifrit/ginkgomon" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/ginkgomon (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/ginkgomon
(from $GOPATH)
hm9000/src/
github.com/cloudfoundry-incubator/natbeat/background_heartbeat.go:10:2:
cannot find package "github.com/tedsuo/ifrit/grouper" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/grouper (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/grouper
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm/serve_api.go:16:2: cannot find
package "github.com/tedsuo/ifrit/http_server" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/http_server (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/http_server
(from $GOPATH)
hm9000/src/
github.com/cloudfoundry-incubator/natbeat/background_heartbeat.go:11:2:
cannot find package "github.com/tedsuo/ifrit/restart" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/restart (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/restart
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm/serve_api.go:17:2: cannot find
package "github.com/tedsuo/ifrit/sigmon" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/sigmon (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/sigmon
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/apiserver/routes.go:3:8: no
buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/tedsuo/rata
(00:05:56)
Error 450001: Action Failed get_task: Task
57eec963-9620-42a8-43dd-3b193d7b928f result: Compiling package hm9000:
Running packaging script: Command exited with 1; Stdout: , Stderr: ++
readlink -nf /var/vcap/packages/golang1.4
+ export
GOROOT=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39
+
GOROOT=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39
+ export
PATH=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/bin:/var/vcap/bosh/bin:/usr/local/bin:/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin
+
PATH=/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/bin:/var/vcap/bosh/bin:/usr/local/bin:/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin
+ export GOPATH=/var/vcap/data/compile/hm9000/hm9000
+ GOPATH=/var/vcap/data/compile/hm9000/hm9000
+ go install github.com/cloudfoundry/hm9000
hm9000/src/
github.com/cloudfoundry/hm9000/actualstatelistener/actual_state_listener.go:16:2:
cannot find package "github.com/cloudfoundry/yagnats" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/cloudfoundry/yagnats (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/cloudfoundry/yagnats
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm9000.go:11:2: no buildable Go
source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/codegangsta/cli
hm9000/src/
github.com/cloudfoundry/storeadapter/etcdstoreadapter/etcd_store_adapter.go:10:2:
cannot find package "github.com/coreos/go-etcd/etcd" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/coreos/go-etcd/etcd (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/coreos/go-etcd/etcd
(from $GOPATH)
hm9000/src/
github.com/cloudfoundry/hm9000/apiserver/handlers/basic_auth_handler.go:6:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/goji/httpauth
hm9000/src/github.com/cloudfoundry/gunk/diegonats/fake_nats_client.go:8:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/nu7hatch/gouuid
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:10:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/onsi/gomega
hm9000/src/github.com/cloudfoundry/gunk/diegonats/nats_client_runner.go:10:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/pivotal-golang/lager
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:11:2:
no buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/tedsuo/ifrit
hm9000/src/github.com/cloudfoundry/gunk/diegonats/gnatsd_test_runner.go:12:2:
cannot find package "github.com/tedsuo/ifrit/ginkgomon" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/ginkgomon (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/ginkgomon
(from $GOPATH)
hm9000/src/
github.com/cloudfoundry-incubator/natbeat/background_heartbeat.go:10:2:
cannot find package "github.com/tedsuo/ifrit/grouper" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/grouper (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/grouper
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm/serve_api.go:16:2: cannot find
package "github.com/tedsuo/ifrit/http_server" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/http_server (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/http_server
(from $GOPATH)
hm9000/src/
github.com/cloudfoundry-incubator/natbeat/background_heartbeat.go:11:2:
cannot find package "github.com/tedsuo/ifrit/restart" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/restart (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/restart
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/hm/serve_api.go:17:2: cannot find
package "github.com/tedsuo/ifrit/sigmon" in any of:
/var/vcap/data/packages/golang1.4/f57ddbc8d55d7a0f08775bf76bb6a27dc98c7ea7.1-99cad68115a543a556da19ff44cca94ba0ff7d39/src/
github.com/tedsuo/ifrit/sigmon (from $GOROOT)
/var/vcap/data/compile/hm9000/hm9000/src/github.com/tedsuo/ifrit/sigmon
(from $GOPATH)
hm9000/src/github.com/cloudfoundry/hm9000/apiserver/routes.go:3:8: no
buildable Go source files in /var/vcap/data/compile/hm9000/hm9000/src/
github.com/tedsuo/rata
---
Armin ranjbar
Re: Staging error: no available stagers (status code: 400, error code: 170001)
Takeshi Morikawa
I am sorry that I was of no help to you.
2015-06-05 13:12 GMT+09:00 Guangcai Wang <guangcai.wang(a)gmail.com>:
Finally, it works after I changed the DEA configuration related to memory and
disk:
< disk_mb: 2048
---
> disk_mb: 10000
1025c1029
< memory_overcommit_factor: 3
---
> memory_overcommit_factor: 8
1028c1032
< staging_disk_limit_mb: 6144
---
> staging_disk_limit_mb: 4096
[#1] Received on [staging.advertise] :
'{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8192,"available_disk":20000}'
[#2] Received on [staging.advertise] :
'{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8192,"available_disk":20000}'
After I deployed a simple php-demo application, they changed to:
[#8] Received on [staging.advertise] :
'{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8064,"available_disk":18976}'
[#9] Received on [staging.advertise] :
'{"id":"0-2b2f83b4755749aba3c31cc58a69a306","stacks":["lucid64","cflinuxfs2"],"available_memory":8064,"available_disk":18976}'
However, I still cannot understand why my previous configuration led to
"Staging error: no available stagers", since the NATS messages below show it
had enough resources. My PHP application only consumes 128M of memory and
1G of disk. Can anyone share some insight?
[#41] Received on [staging.advertise] :
'{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
ubuntu(a)boshclivm:~/apps/cf-php-demo$ cat manifest.yml
---
applications:
- name: cf-php-demo
  memory: 128M
  instances: 1
  host: cf-php-demo
  path: .
ubuntu(a)boshclivm:~/apps/cf-php-demo$ cf apps
name          requested state   instances   memory   disk   urls
cf-php-demo   started           1/1         128M     1G     cf-php-demo.runmyapp.io
On Thu, Jun 4, 2015 at 5:08 PM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:
I got the NATS messages on 'staging.advertise'. They show enough resources,
but that does not seem right, and it also cannot explain the error:
Server error, status code: 400, error code: 170001, message: Staging error:
no available stagers.
[#41] Received on [staging.advertise] :
'{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#42] Received on [staging.advertise] :
'{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#43] Received on [staging.advertise] :
'{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#44] Received on [staging.advertise] :
'{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
+------------------------------------+---------+---------------+---------------+
| Job/index                          | State   | Resource Pool | IPs           |
+------------------------------------+---------+---------------+---------------+
| api_worker_z1/0                    | running | small_z1      | 100.64.0.23   |
| api_z1/0                           | running | medium_z1     | 100.64.0.21   |
| clock_global/0                     | running | medium_z1     | 100.64.0.22   |
| etcd_z1/0                          | running | medium_z1     | 100.64.1.8    |
| ha_proxy_z1/0                      | running | router_z1     | 100.64.1.0    |
|                                    |         |               | 137.172.74.90 |
| hm9000_z1/0                        | running | medium_z1     | 100.64.0.24   |
| loggregator_trafficcontroller_z1/0 | running | small_z1      | 100.64.0.27   |
| loggregator_z1/0                   | running | medium_z1     | 100.64.0.26   |
| login_z1/0                         | running | medium_z1     | 100.64.0.20   |
| nats_z1/0                          | running | medium_z1     | 100.64.1.2    |
| nfs_z1/0                           | running | medium_z1     | 100.64.1.3    |
| postgres_z1/0                      | running | medium_z1     | 100.64.1.4    |
| router_z1/0                        | running | router_z1     | 100.64.1.5    |
| runner_z1/0                        | running | runner_z1     | 100.64.0.25   |
| stats_z1/0                         | running | small_z1      | 100.64.0.18   |
| uaa_z1/0                           | running | medium_z1     | 100.64.0.19   |
+------------------------------------+---------+---------------+---------------+
- 100.64.0.25
m1.large | 8GB RAM | 4 VCPU | 20.0GB Disk
92cf66ec-f2e1-4505-bd25-28c02e991535 | m1.large | 8192 | 20 | 20 | | 4 | 1.0 | True
On Thu, Jun 4, 2015 at 11:57 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:
From the source code
/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb:26,
it seems there is not enough memory or disk.
def stage(&completion_callback)
  @stager_id = @stager_pool.find_stager(@app.stack.name, staging_task_memory_mb, staging_task_disk_mb)
  raise Errors::ApiError.new_from_details('StagingError', 'no available stagers') unless @stager_id
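For illustration only (this is not the cloud_controller_ng implementation): find_stager
comes back nil when no advertised DEA offers at least the staging task's memory and disk
for the requested stack, and that nil is exactly what raises the 170001 error above. A toy
Ruby version of the check, using the advertisement from the failing deployment
(available_memory 3072, available_disk 4096) and a hypothetical staging requirement of
1024 MB memory and 6144 MB disk:

# Toy model of stager selection; not the real StagerPool code.
Advertisement = Struct.new(:dea_id, :stacks, :available_memory, :available_disk)

def find_stager(advertisements, stack, memory_mb, disk_mb)
  eligible = advertisements.select do |ad|
    ad.stacks.include?(stack) &&
      ad.available_memory >= memory_mb &&
      ad.available_disk >= disk_mb
  end
  best = eligible.max_by(&:available_memory)
  best && best.dea_id            # nil here is what becomes "no available stagers"
end

ads = [Advertisement.new('0-05b732df21c54f9cab3ac42869b4be64',
                         %w[lucid64 cflinuxfs2], 3072, 4096)]

find_stager(ads, 'cflinuxfs2', 1024, 6144)   # => nil, 4096 MB of disk < 6144 MB required
find_stager(ads, 'cflinuxfs2', 1024, 4096)   # => "0-05b732df21c54f9cab3ac42869b4be64"

With numbers like these no DEA qualifies, even though the pushed app only asks for 128M
and 1G: it is the staging task's memory and disk (the two arguments passed to find_stager
above), not the app's own quota, that have to fit into what a DEA advertises.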
However, this is my first app, and it should be light. The DEA is using
m1.large, which is:
m1.large | 4096 | 20
Has anyone seen the same error? Any suggestions on the manifest, or any debugging tips?
Another question: I want to add more debug information to
cloud_controller_ng.log. I tried to add some code in
/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb,
but it did not show up in the log. How can I do this?
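One hedged suggestion (a guess, not a confirmed diagnosis): cloud_controller_ng only picks
up source changes when the process is restarted, and it writes its log through the Steno
library, so a plain puts added to app_stager_task.rb goes to stdout rather than
cloud_controller_ng.log. A sketch of a log line that should land in the log file; the
logger name 'cc.app_stager_task' is illustrative, and the cloud_controller_ng process must
be restarted via monit after editing the file:

# Illustrative snippet to paste just before the find_stager call in app_stager_task.rb;
# the identifiers interpolated below already exist in that method.
require 'steno'   # already loaded by cloud_controller_ng itself

debug_logger = Steno.logger('cc.app_stager_task')   # logger name is illustrative
debug_logger.info("find_stager args: stack=#{@app.stack.name} " \
                  "memory_mb=#{staging_task_memory_mb} disk_mb=#{staging_task_disk_mb}")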
On Thu, Jun 4, 2015 at 10:14 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:
Attached is the deployment manifest. It was generated by spiff, and then I
modified it.
On Thu, Jun 4, 2015 at 12:47 AM, Takeshi Morikawa <moog0814(a)gmail.com>
wrote:
Please check the 'staging.advertise' NATS messages:
https://github.com/cloudfoundry/dea_ng#staging
Sample command:
bundle exec nats-sub -s nats://[nats.user]:[nats.password]@[nats_ipaddress]:[nats.port] 'staging.advertise'
I have one additional request: can you share your BOSH deployment manifest?
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
Re: Announcing Experimental support for Asynchronous Service Operations
Duncan Johnston-Watt <duncan.johnstonwatt@...>
+1
--
Duncan Johnston-Watt
CEO | Cloudsoft Corporation
+44 777 190 2653 | @duncanjw
Sent from my iPhone
On 4 Jun 2015, at 01:05, Onsi Fakhouri <ofakhouri(a)pivotal.io> wrote:
Well done Services API! This is an awesome milestone!
On Wed, Jun 3, 2015 at 5:04 PM, Chip Childers <cchilders(a)cloudfoundry.org> wrote:
Awesome news! Long time coming, and it opens up a whole world of additional capabilities for users.
Nice work everyone!
On Jun 4, 2015, at 9:00 AM, Shannon Coen <scoen(a)pivotal.io> wrote:
On behalf of the Services API team, including Dojo participants from IBM and SAP, I'm pleased to announce experimental availability and published documentation for this much-anticipated feature.
As of cf-release v208 and CLI v6.11.1, Cloud Foundry now supports an enhanced service broker integration in support of long-running provisioning, update, and delete operations. This significantly broadens the supported use cases for Cloud Foundry Marketplace Services, and I can't wait to hear what creative things the ecosystem does with it. Provision VMs, orchestrate clusters, install software, move data... yes, your broker can even open support tickets to have those things done manually!
This feature is currently considered experimental, as we'd like you all to review our docs, try out the feature, and give us feedback. We are very interested to hear about any confusion in the docs or the UX, and any sticky issues you encounter in implementation. Our goal is for our docs to enable a painless, intuitive (can we hope for joyful?) implementation experience.
We have not bumped the broker API yet for this feature. You'll notice that our documentation for the feature is separate from the stable API docs at this point. Once we're confident in the design (we're relying on your feedback!), we'll bump the broker API version, move the docs for asynchronous operations into the stable docs, AND implement support for asynchronous bind/create-key and unbind/delete-key.
Documentation:
- http://docs.cloudfoundry.org/services/asynchronous-operations.html
- http://docs.cloudfoundry.org/services/api.html
Example broker for AWS (contributed by IBM):
- http://docs.cloudfoundry.org/services/examples.html
- https://github.com/cloudfoundry-samples/go_service_broker
Demo of the feature presented at CF Summit 2015:
- https://youtu.be/Ij5KSKrAq9Q
tl;dr
Cloud Foundry expects broker responses within 60 seconds. Now a broker can return an immediate response indicating that a provision, update, or delete operation is in progress. Cloud Foundry then returns a similar response to the client, and begins polling the broker for the status of the operation. Users, via API clients, can discover the status of the operation ("in progress", "succeeded", or "failed"), and brokers can provide user-facing messages in response to each poll which are exposed to users (e.g. "VMs provisioned, installing software, 30% complete").
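To make that flow concrete, here is a minimal sketch of a broker that answers
asynchronously. It is not the official example broker, just an illustration in
Ruby/Sinatra; the endpoint paths follow the v2 broker endpoints described in the docs
linked above, and the in-memory hash plus the sleep stand in for real provisioning work:

require 'sinatra'
require 'json'

operations = {}  # instance_id => { state: ..., description: ... }

# Provision: acknowledge immediately, finish the work in the background.
put '/v2/service_instances/:id' do
  operations[params[:id]] = { state: 'in progress', description: 'creating VMs' }
  Thread.new do
    sleep 30  # stand-in for the real long-running work
    operations[params[:id]] = { state: 'succeeded', description: 'instance ready' }
  end
  content_type :json
  status 202             # signals that the operation continues asynchronously
  {}.to_json
end

# Cloud Foundry polls this endpoint until it sees "succeeded" or "failed".
get '/v2/service_instances/:id/last_operation' do
  content_type :json
  op = operations.fetch(params[:id], { state: 'failed', description: 'unknown instance' })
  op.to_json
end

The description strings returned by last_operation are what surface to users as the
progress messages mentioned above.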
Thank you,
Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
--
Cloudsoft Corporation Limited, Registered in Scotland No: SC349230.
Registered Office: 13 Dryden Place, Edinburgh, EH9 1RP
This e-mail message is confidential and for use by the addressee only. If
the message is received by anyone other than the addressee, please return
the message to the sender by replying to it and then delete the message
from your computer. Internet e-mails are not necessarily secure. Cloudsoft
Corporation Limited does not accept responsibility for changes made to this
message after it was sent.
Whilst all reasonable care has been taken to avoid the transmission of
viruses, it is the responsibility of the recipient to ensure that the
onward transmission, opening or use of this message and any attachments
will not adversely affect its systems or data. No responsibility is
accepted by Cloudsoft Corporation Limited in this regard and the recipient
should carry out such virus and other checks as it considers appropriate.