
Re: [cf-bosh] Re: Re: [Metron Agent] failed to generate job templates with metron agent on top of OpenStack Dynamic network

Amit Kumar Gupta
 

Hi Yitao,

I would recommend either using manual networks, or following the GitHub
issue Ben R created above to see if the BOSH team can figure out the root
cause of this issue with dynamic networks.

Best,
Amit

On Tue, Apr 12, 2016 at 6:46 PM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

Thanks Amit.

If I use a dynamic network, OpenStack still allocates the IP address. If I
then use that allocated IP address to configure metron_agent and launch the
VM, could that solve the issue?

On Tue, Apr 12, 2016 at 1:24 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

This will not work with dynamic networks. Many jobs in cf-release rely
on data from BOSH to determine their IP so that configuration files can be
rendered up-front by the director rather than at runtime, which would
require system calls to determine the IP. metron_agent is one such job, and
it tends to be colocated with every other job (it is what allows all system
component logs to be aggregated through the loggregator system), so this
would require all Cloud Foundry VMs to be on a manual network. You don't
need to manually pick the IPs; you just need to tell BOSH which IPs in the
network not to use by specifying them in the "reserved" range.
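To illustrate the suggestion about reserved ranges, a manual network subnet might look like this in the deployment manifest (the CIDR, gateway, and reserved IPs below are illustrative; only the net_id is taken from the thread):

```yaml
networks:
- name: manual-net
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    # Tell BOSH not to assign these IPs; they are already in use
    # (or managed) by the OpenStack infrastructure.
    reserved:
    - 10.0.0.2 - 10.0.0.20
    dns:
    - 8.8.8.8
    cloud_properties:
      net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
```

With a manual network, the director knows each VM's IP up-front and can render templates like syslog_forwarder.conf.erb before the VM exists.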

Since so many different components depend on being able to determine
their IP via BOSH data, there's no quick workaround if you want to stick to
using dynamic networks, but we're aware of this current limitation.

Best,
Amit

On Mon, Apr 11, 2016 at 7:23 PM, Yitao Jiang <jiangyt.cn(a)gmail.com>
wrote:

Is it a bug in CF or BOSH?

On Fri, Apr 8, 2016 at 12:08 PM, Ben R <vagcom.ben(a)gmail.com> wrote:

I have the same issue. It has to do with every release since bosh 248.
However, dynamic networks with older bosh releases + cf-231/cf-231 work.

This must be a bug.

Ben R


On Thu, Apr 7, 2016 at 8:55 PM, Yitao Jiang <jiangyt.cn(a)gmail.com>
wrote:

Hi guys,

When deploying CF on top of OpenStack with a dynamic network, the jobs
fail with a metron_agent error:
Error filling in template 'syslog_forwarder.conf.erb' (line 44:
undefined method `strip' for nil:NilClass)

Here are the related logs:

Detecting deployment changes
----------------------------
Releases
cf
version type changed: String -> Fixnum
- 233
+ 233

Compilation
No changes

Update
± canaries:
- 1
+ 0

Resource pools
No changes

Disk pools
No changes

Networks
dynamic-net
+ name: dynamic-net
subnets
10.0.0.0/24
cloud_properties
+ net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
+ security_groups: ["Test OS SG_20160128T070152Z"]
+ dns: ["114.114.114.114", "8.8.8.8"]

+ range: 10.0.0.0/24
+ name: Test OS Sub Internal Network_20160128T070152Z

+ type: dynamic


Jobs
stats_z1

± networks:

- {"name"=>"cf1"}

+ {"name"=>"dynamic-net"}


Properties
No changes


Meta
No changes


Please review all changes carefully

Deploying
---------

Are you sure you want to deploy? (type 'yes' to continue): yes


Director task 57
Started preparing deployment > Preparing deployment. Done (00:00:03)


Error 100: Unable to render instance groups for deployment. Errors
are:
- Unable to render jobs for instance group 'stats_z1'. Errors are:

- Unable to render templates for job 'metron_agent'. Errors are:

- Error filling in template 'syslog_forwarder.conf.erb' (line
44: undefined method `strip' for nil:NilClass)

Task 57 error

For a more detailed error report, run: bosh task 57 --debug
As the IPs are managed by OpenStack, BOSH cannot get the actual IP address
of each VM until the VM is alive, which leads to the generated job spec not
containing any IP address info.
So, do I have to configure the network type to manual?

Snippet of the deployment YAML:

- name: dynamic-net
  subnets:
  - cloud_properties:
      net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
      security_groups:
      - Test OS SG_20160128T070152Z
    dns:
    - 114.114.114.114
    - 8.8.8.8
    range: 10.0.0.0/24
    name: Test OS Sub Internal Network_20160128T070152Z
  type: dynamic

Rendered job spec:

{"deployment"=>"staging-01", "job"=

{"name"=>"stats_z1", "templates"=>[{"name"=>"collector",
"version"=>"6c210292f18d129e9a037fe7053836db2d494344",
"sha1"=>"38927f47b15c2daf6c8a2e7c760e73e5ff90
dfd4", "blobstore_id"=>"23531029-0ee1-4267-8863-b5f931afaecb"},
{"name"=>"metron_agent",
"version"=>"2b80a211127fc642fc8bb0d14d7eb30c37730db3", "sha1"=>"150f2
7445c2ef960951c1f26606525d41ec629b2",
"blobstore_id"=>"e87174dc-f3f7-4768-94cd-74f299813528"}],
"template"=>"collector", "version"=>"6c210292f18d129e9a037fe70
53836db2d494344", "sha1"=>"38927f47b15c2daf6c8a2e7c760e73e5ff90dfd4",
"blobstore_id"=>"23531029-0ee1-4267-8863-b5f931afaecb"}, "index"=>0,
"bootstrap"=>true,
"name"=>"stats_z1", "id"=>"99f349d0-fb5d-4de7-9912-3de5559d2f19",
"az"=>nil,

*"networks"=>{"dynamic-net"=>{"type"=>"dynamic",
"cloud_properties"=>{"net_id"=>"0700ae03-4b38-464e-b40d-0a9c8dd18ff0",
"security_groups"=>["Test OS SG_20160128T070152Z"]},
"dns"=>["114.114.114.114", "8.8.8.8", "10.0.0.13"], "default"=>["dns",
"gateway"],
"dns_record_name"=>"0.stats-z1.dynamic-net.staging-01.microbosh"}}*,
"properties"=>{"collector"=>{"aws"=>{
"access_key_id"=>nil, "secret_access_key"=>nil},
"datadog"=>{"api_key"=>nil, "application_key"=>nil},
"deployment_name"=>nil, "logging_level"=>"info", "interv
als"=>{"discover"=>60, "healthz"=>30, "local_metrics"=>30,
"nats_ping"=>30, "prune"=>300, "varz"=>30}, "use_aws_cloudwatch"=>false,
"use_datadog"=>false, "use
_tsdb"=>false, "opentsdb"=>{"address"=>nil, "port"=>nil},
"use_graphite"=>false, "graphite"=>{"address"=>nil, "port"=>nil},
"memory_threshold"=>800}, "nats"=>
{"machines"=>["10.0.0.127"], "password"=>"NATS_PASSWORD",
"port"=>4222, "user"=>"NATS_USER"},
"syslog_daemon_config"=>{"address"=>nil, "port"=>nil, "transport
"=>"tcp", "fallback_addresses"=>[], "custom_rule"=>"",
"max_message_size"=>"4k"},
"metron_agent"=>{"dropsonde_incoming_port"=>3457, "preferred_protocol"=>"udp
", "tls"=>{"client_cert"=>"", "client_key"=>""}, "debug"=>false,
"zone"=>"z1", "deployment"=>"ya-staging-01",
"tcp"=>{"batching_buffer_bytes"=>10240, "batchin
g_buffer_flush_interval_milliseconds"=>100},
"logrotate"=>{"freq_min"=>5, "rotate"=>7, "size"=>"50M"},
"buffer_size"=>10000, "enable_buffer"=>false}, "metron_
endpoint"=>{"shared_secret"=>"LOGGREGATOR_ENDPOINT_SHARED_SECRET"},
"loggregator"=>{"tls"=>{"ca_cert"=>""}, "dropsonde_incoming_port"=>3457,
"etcd"=>{"machine
s"=>["10.0.0.133"], "maxconcurrentrequests"=>10}}},
"dns_domain_name"=>"microbosh", "links"=>{},
"address"=>"99f349d0-fb5d-4de7-9912-3de5559d2f19.stats-z1.dyn
amic-net.ya-staging-01.microbosh", "persistent_disk"=>0,
"resource_pool"=>"small_z1"}

--

Regards,

Yitao



Re: Doppler/Firehose - Multiline Log Entry

Mike Youngstrom <youngm@...>
 

Rather than continue this discussion here I've created an issue stating my
case here: https://github.com/cloudfoundry-incubator/executor/issues/17

Mike

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the insight, Jim. I still think that the Executor is the place
to fix this, since multi-line logging isn't a Loggregator limitation; it is
a log injection limitation, which is owned by the Executor. I'll open an
issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the
log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability,
we are thinking about future changes that will lower the amount of touching
even further. The next place that log message processing can be done is
either in a nozzle, or the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of distributed across runner VMs.
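The downstream substitution described above could be sketched as follows (illustrative only; the function name and surrounding program are assumptions, not actual loggregator or nozzle code):

```go
package main

import (
	"fmt"
	"strings"
)

// restoreNewlines rewrites the U+2028 line-separator token back into
// ordinary newlines, as a downstream nozzle or log-aggregator ingester
// might do before indexing or display.
func restoreNewlines(msg string) string {
	return strings.Replace(msg, "\u2028", "\n", -1)
}

func main() {
	// A Java stack trace logged as a single event via the \u2028 trick.
	trace := "java.lang.RuntimeException: boom\u2028\tat com.example.App.main(App.java:5)"
	fmt.Println(restoreNewlines(trace))
}
```

Doing this in one downstream place keeps the runner VMs untouched, which matches the single-configuration argument above.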

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time instead of asking the cli, splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk
to them. I'll certainly support you by helping explain the need. I'd think
we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plug in
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI.
It attaches to the firehose to display everything, but it would be easy to
modify to just look at a single app, and sub out the magic token for
newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution that can
work for development (though less of a priority because you can switch
config between dev and cf) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs", but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (eg,
logback, log4j) log a token (eg, \u2028) other than \n to
denote line breaks in your stack traces; and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment, use something
else, e.g. NEWLINE.

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing shows
that "\u2028" doesn't function as a new line character in
bash and causes eclipse console to wig out. I don't think "\u2028"
is a viable long term solution. Hope you make progress on a metric format
available to an app in a container. I too would like a tracker link to
such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear, what we're basically doing is using the unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event right? Does \u2028 work as a new
line character when tailing logs in the CLI? Anyone tried this unicode new
line character in various consoles? IDE, xterm, etc? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested for
Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config[1] ,
and then use the following logstash conf.[2]
Replace the unicode newline character \u2028 with \n, which
Kibana will display as a new line.

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in it
is the only way to make gsub work

}

to replace the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>
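For completeness, the logback side of this workaround might look like the following logback.xml sketch (illustrative; the appender and pattern details are assumptions to adapt to your own config):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %replace rewrites newlines inside the exception trace to U+2028
           so loggregator treats the whole trace as one log event;
           %nopex suppresses the default exception output. -->
      <pattern>%d %-5level %logger{36} - %msg %replace(%xException){'\n','\u2028'}%nopex%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```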


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in my
conversations with the Loggregator team I think we're basically stuck not
sure what the right thing to do is on the client side. How does the client
trigger in loggregator that this is a multi line log message or what is the
right way for loggregator to detect that the client is trying to send a
multi line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue?
What is the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io
| 303.618.0963



Re: Staging and Runtime Hooks Feature Narrative

Ted Young
 

Hi Troy,

I agree that we should bring our buildpack implementation up to date with
Heroku's by adding the .profile runtime hook. Also, I believe Eric Malm had
some thoughts on this hook, and how we could use something like it to make
the application start process aware of when an app is in "pre-flight" and
delay starting the health monitor.

However, I am concerned about adding the staging hooks. In particular, what
is being proposed is that we fork the buildpack workflow from Heroku. That
sounds like a pretty big public maneuver, as it could make our buildpacks
no longer compatible with Heroku's. I would suggest we not fork the
buildpack model, and instead consult with Heroku about improving it.

The primary reason given for staging hooks is not that this feature is
necessary for Cloud Foundry users, but that it will help with demos. In
general, I do not think we should make large changes just to help with
demos, there must be broader utility that is commensurate with the size of
the change. I understand you have found these hooks to be a useful shim,
and users would like to shim things as well. However, some of
the use cases mentioned for staging hooks look like issues that are not
specific to staging (database migrations, etc) but are instead "camping" on
staging as a convenience. Buildpack staging is very specific to a single
operating system (linux), and a single deployment style (docker apps cannot
use these features). For issues that are not specific to buildpacks, I
would prefer we deliver higher-level, cross-platform solutions.

My suggestion is to unpack the staging issues you raise, and separate out
the issues that are specific to buildpacks. For those issues, we open a
discussion with Heroku. For more general issues, we look for cf-wide
solutions.

Thanks,
Ted

On Tue, Apr 12, 2016 at 8:14 PM, John Shahid <jshahid(a)pivotal.io> wrote:

Hi Troy,

Thanks for putting together this proposal. I added some comments/questions
to the document. I would love your feedback/response on those. Most of the
comments are concerned with the lack of concrete use cases. I think adding
a few examples to each use case will clarify the value added by the hooks.

Cheers,

JS


On Mon, Apr 11, 2016 at 1:04 PM Mike Youngstrom <youngm(a)gmail.com> wrote:

An interesting proposal. Any thoughts about this proposal in relation to
multi-buildpacks [0]? How many of the use cases for this feature go away
in lieu of multi-buildpack support? I think it would be interesting to be
able to apply hooks without checking scripts into the application (like
multi-buildpack).

This feature also appears to be somewhat related to [1]. I hope that
someone is overseeing all these newly proposed buildpack features to help
ensure they are coherent.

Mike


[0]
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/H64GGU6Z75CZDXNWC7CKUX64JNPARU6Y/
[1]
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/thread/GRKFQ2UOQL7APRN6OTGET5HTOJZ7DHRQ/#SEA2RWDCAURSVPIMBXXJMWN7JYFQICL3

On Fri, Apr 8, 2016 at 4:16 PM, Troy Topnik <troy.topnik(a)hpe.com> wrote:

This feature allows developers more control of the staging and
deployment of their application code, without them having to fork existing
buildpacks or create their own.


https://docs.google.com/document/d/1PnTtTLwXOTG7f70ilWGlbTbi1LAXZu9zYnrUVvjr31I/edit

Hooks give developers the ability to optionally:
* run scripts in the staging container before and/or after the
bin/compile scripts executed by the buildpack, and
* run scripts in each app container before the app starts (via .profile
as per the Heroku buildpack API)
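As an illustration of the second bullet, a .profile hook is just a shell script sourced in the app container before the start command; the contents below are hypothetical:

```shell
#!/bin/sh
# .profile -- sourced in the application container before the app's
# start command runs (per the Heroku buildpack API).

# Example: expose extra configuration to the app process.
export APP_BOOT_MODE="preflight"

# Example: run a one-off setup task before the app starts.
echo "pre-start setup complete (mode: ${APP_BOOT_MODE})"
```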

A similar feature has been available and used extensively in Stackato
for a few years, and we'd like to contribute this functionality back to
Cloud Foundry.

A proof-of-concept of this feature has already been submitted as a pull
request, and the Feature Narrative addresses many of the questions raised
in the PR discussion:


https://github.com/cloudfoundry-incubator/buildpack_app_lifecycle/pull/13

Please weigh in with comments in the document itself or in this thread.

Thanks,

TT


Re: Staging and Runtime Hooks Feature Narrative

Troy Topnik
 

I think many of the use cases *could* be dealt with by multi-buildpack support or in a forked buildpack which executes the desired commands. However, the point of the feature is to allow the addition of these commands to the staging process without having to create an additional buildpack or make modifications to an existing one.

In my experience, users who know the application framework they work with may not be that familiar with the buildpacks that deploy them (e.g. Java developers trying to figure out the Ruby code in the Java buildpack). This allows them to make small "one off" modifications for a particular application deployment.

TT


Re: Doppler/Firehose - Multiline Log Entry

Eric Malm <emalm@...>
 

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the insight, Jim. I still think that the Executor is the place
to fix this, since multi-line logging isn't a Loggregator limitation; it is
a log injection limitation, which is owned by the Executor. I'll open an
issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the log
lines out of Garden and drops them into dropsonde messages. I doubt they'll
think it's a good idea to implement substitution in that processing. You
can certainly ask Eric - he's very aware of the underlying problem.

After that point, the Loggregator system does its best to touch messages
as little as possible, and to improve performance and reliability, we are
thinking about future changes that will lower the amount of touching even
further. The next place that log message processing can be done is either
in a nozzle, or the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time instead of asking the cli, splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk to
them. I'll certainly support you by helping explain the need. I'd think we
want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to substitute
\u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plug in
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI. It
attaches to the firehose to display everything, but it would be easy to modify
to just look at a single app, and sub out the magic token for newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Hi David,

The problem for me is that I'm searching for a solution that can
work for development (though less of a priority because you can switch
config between dev and cf) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs" but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (eg,
logback, log4j) log a token (eg, \u2028) other than \n to
denote line breaks in your stack traces; and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment, use something else,
e.g. NEWLINE.

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing shows
that "\u2028" doesn't function as a new line character in bash
and causes eclipse console to wig out. I don't think "\u2028"
is a viable long term solution. Hope you make progress on a metric format
available to an app in a container. I too would like a tracker link to
such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear, what we're basically doing is using the unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event right? Does \u2028 work as a new
line character when tailing logs in the CLI? Anyone tried this unicode new
line character in various consoles? IDE, xterm, etc? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested for
Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config[1] ,
and then use the following logstash conf.[2]
Replace the unicode newline character \u2028 with \n, which
Kibana will display as a new line.

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in it
is the only way to make gsub work

}

to replace the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in my
conversations with the Loggregator team I think we're basically stuck not
sure what the right thing to do is on the client side. How does the client
trigger in loggregator that this is a multi line log message or what is the
right way for loggregator to detect that the client is trying to send a
multi line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue?
What is the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io
| 303.618.0963



Re: Staging and Runtime Hooks Feature Narrative

Troy Topnik
 

Credit to Jan Dubois, who actually drafted the document, and who will be helping me respond to your comments in the document.

I'll provide concrete use cases where possible. The ones I have immediately to hand are specific to Stackato, which implements this feature a little differently and in some cases relies on features not available in CF (e.g. OS package installs). I'll try to generalize the examples, or at least explain how they would be relevant to core CF.

TT


Re: Doppler/Firehose - Multiline Log Entry

Mike Youngstrom <youngm@...>
 

Thanks for the insight, Jim. I still think that the Executor is the place
to fix this, since multi-line logging isn't a Loggregator limitation; it is
a log injection limitation, which is owned by the Executor. I'll open an
issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the log
lines out of Garden and drops them into dropsonde messages. I doubt they'll
think it's a good idea to implement substitution in that processing. You
can certainly ask Eric - he's very aware of the underlying problem.

After that point, the Loggregator system does its best to touch messages
as little as possible, and to improve performance and reliability, we are
thinking about future changes that will lower the amount of touching even
further. The next place that log message processing can be done is either
in a nozzle, or the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation within
the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time, instead of asking the cli, splunk, and anyone listening on the
firehose to do so?

The obvious downside is that this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I imagine
the first question Dies will ask though is if you would support something
like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk to
them. I'll certainly support you by helping explain the need. I'd think we
want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to substitute
/u2028 with /n could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plugin
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI. It
attaches to the firehose to display everything, but it would be easy to modify
to look at just a single app and sub out the magic token for newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority, since you can switch
config between dev and CF) and for viewing logs via "cf logs", in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs", but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <david(a)davidlaing.com
wrote:
FWIW, the technique is to have your logging solution (eg,
logback, log4j) log a token (eg, \u2028) other than \n to denote
line breaks in your stack traces; and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment; use something else;
eg NEWLINE
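The round trip David describes (emit a token such as \u2028 instead of \n at the source, restore \n at the aggregator) can be sketched in a few lines. This is an illustration only, in Python; in a real CF deployment the first half is done by the app's logging config (e.g. the Logback pattern quoted later in this thread) and the second half by the aggregator:

```python
# Illustration of the \u2028 workaround for multi-line log entries.

# Producer side: collapse a stack trace to a single log line by
# replacing real newlines with U+2028 (LINE SEPARATOR), so Loggregator
# (which splits events on \n) sees one event instead of several.
def encode_multiline(message: str, token: str = "\u2028") -> str:
    return message.replace("\n", token)

# Aggregator side: restore real newlines before indexing or display.
def decode_multiline(message: str, token: str = "\u2028") -> str:
    return message.replace(token, "\n")

trace = ("NullPointerException\n"
         "  at Foo.bar(Foo.java:42)\n"
         "  at Baz.main(Baz.java:7)")
wire = encode_multiline(trace)
assert "\n" not in wire                  # arrives as a single log event
assert decode_multiline(wire) == trace   # round-trips losslessly
```

The same substitution works with any token (e.g. the literal string NEWLINE) if \u2028 misbehaves in your environment; only the two replace calls change.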

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing shows
that "\u2028" doesn't function as a newline character in bash
and causes the Eclipse console to wig out. I don't think "\u2028"
is a viable long-term solution. Hope you make progress on a metric format
available to an app in a container. I too would like a tracker link to
such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear, what we're basically doing is using a unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event, right? Does \u2028 work as a newline
character when tailing logs in the CLI? Has anyone tried this unicode
newline character in various consoles (IDE, xterm, etc.)? I'm wondering if
developers will need different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested for
Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config[1] ,
and then use the following logstash conf.[2]
Replace the unicode newline character \u2028 with \n, which
Kibana will display as a new line.

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in it
is the only way to make gsub work

}

to replace the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in my
conversations with the Loggregator team, I think we're basically stuck, not
sure what the right thing to do on the client side is. How does the client
signal to loggregator that this is a multi-line log message, or what is the
right way for loggregator to detect that the client is trying to send a
multi-line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue? How do we
correctly deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963



Re: AWS / CF v233 deployment

Christian Ang
 

Hi Sylvain,

It looks like your problem might be that one or more of the consul certificates in your cf manifest is not a valid PEM encoded certificate, or the certificates are missing entirely. Do the consul properties in your cf manifest look approximately like this (with your own certificates and keys):

https://github.com/cloudfoundry-incubator/consul-release/blob/master/manifests/aws/multi-az-ssl.yml#L122-L261

Also, if you decode your certificates by running `openssl x509 -in server-ca.crt -text -noout`, do they appear to be valid?

If they are invalid, you can try regenerating them using `scripts/generate-consul-certs` and copying each file's contents into the appropriate place in your cf manifest's consul properties.
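Alongside the openssl check above, a quick structural sanity check is possible with Python's standard library. This is a sketch only: it validates the PEM encoding (BEGIN/END markers around a base64 payload), not the certificate's signature, expiry, or chain:

```python
import ssl

def looks_like_pem_cert(pem: str) -> bool:
    """Structural check only: BEGIN/END CERTIFICATE markers around a
    base64 payload. Does not verify signatures, expiry, or trust."""
    try:
        # Raises ValueError (or a subclass) if the markers are missing
        # or the payload is not valid base64.
        ssl.PEM_cert_to_DER_cert(pem)
        return True
    except ValueError:
        return False

# A structurally valid (but meaningless) PEM blob passes; anything
# missing the markers does not.
assert looks_like_pem_cert(
    "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n")
assert not looks_like_pem_cert("clearly not a certificate")
```

This catches the common copy/paste failures (truncated blob, wrong markers, stray characters) that produce the "failed to parse certificate PEM data" error seen in the consul logs.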

Thanks,
Christian and George


HM9000 updates for all pipelines

Michael Fraenkel <michael.fraenkel@...>
 

For those who have pipelines that run DEAs and HM9000, you will need to
update your manifest to include certificate information.
For CF users that consume releases, this will only apply once it makes
it into a release. The release-notes will document the necessary changes.

The following properties need to be filled in your manifest

hm9000:
ca_cert:
server_cert:
server_key:
client_cert:
client_key:

You can use the scripts/generate-hm9000-certs to help generate the
certificates and keys needed. You can look at
cf-infrastructure-bosh-lite.yml for examples of how the data must be
formatted.
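As a hedged sketch of how the filled-in block typically looks in a manifest (the exact nesting may differ in your deployment; the certificate contents below are placeholders, not real keys), multi-line PEM values use YAML block scalars:

```yaml
properties:
  hm9000:
    ca_cert: |
      -----BEGIN CERTIFICATE-----
      ... output of scripts/generate-hm9000-certs ...
      -----END CERTIFICATE-----
    server_cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    server_key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    client_cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    client_key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
```

The `|` block scalar preserves the newlines inside the PEM blocks, which is the formatting detail that most often trips people up.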

If you have any questions, just ask on the runtime-og slack channel.

- Michael (Runtime-OG PM)


Distributed Persistance and Full Text Indexing

hiren patel
 

In a traditional environment, we have massive physical servers hosting databases and indexing engines (e.g. Solr).

How can we achieve similar results with a small footprint but a large number of services/servers on a Cloud Foundry based offering?

The ultimate goal is to move away from standalone massive physical servers to many small distributed servers/services without compromising performance.

What service offering on top of Cloud Foundry can help achieve this?


Re: Issue with API job while downgrading CF

Nicholas Calugar
 

Hi Anuj,

This isn't going to work, as you have a v231 database and the code from
v214, which doesn't contain the migrations in between. Also, it's doing a
migrate up, not a migrate down.

While the migrations may be written in a way that they will migrate down,
we don't test, support, or recommend this workflow. You would have to
manually migrate down using code from v231.

Nick

On Wed, Apr 13, 2016 at 1:43 AM Anuj Jain <anuj17280(a)gmail.com> wrote:

Hi,

currently I am running CF v231 and for some reason want to
downgrade/rollback to CF v214 - while trying to do that I am getting the
below error with the API VMs

============= Error msg

Started updating job api_z1 > api_z1/2ac60eec-c41d-4b14-b473-ba40907ee77e
(0) (canary). Failed: `api_z1/0 (2ac60eec-c41d-4b14-b473-ba40907ee77e)' is
not running after update (00:10:51)



Error 400007: `api_z1/0 (2ac60eec-c41d-4b14-b473-ba40907ee77e)' is not
running after update

============================

The above error is caused by the ‘cloud_controller_migration’ job, which
needs to run on the api_z1 jobs and is failing. When I tried restarting the
job manually I got the error:

/bin/sh: 1: Syntax error: EOF in backquote substitution

Could someone point me to how I can fix it?

cloud controller migration script error:

# /var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_migration_ctl
start

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution


Note: it looks like the line 'chpst -u vcap:vcap bundle exec rake db:migrate'
is causing the issue - it tries to migrate the DB when CC runs for the first
time.

- Anuj
--
Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.


Issue with API job while downgrading CF

Anuj Jain <anuj17280@...>
 

Hi,

currently I am running CF v231 and for some reason want to
downgrade/rollback to CF v214 - while trying to do that I am getting the
below error with the API VMs

============= Error msg

Started updating job api_z1 > api_z1/2ac60eec-c41d-4b14-b473-ba40907ee77e
(0) (canary). Failed: `api_z1/0 (2ac60eec-c41d-4b14-b473-ba40907ee77e)' is
not running after update (00:10:51)



Error 400007: `api_z1/0 (2ac60eec-c41d-4b14-b473-ba40907ee77e)' is not
running after update

============================

The above error is caused by the ‘cloud_controller_migration’ job, which
needs to run on the api_z1 jobs and is failing. When I tried restarting the
job manually I got the error:

/bin/sh: 1: Syntax error: EOF in backquote substitution

Could someone point me to how I can fix it?

cloud controller migration script error:

# /var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_migration_ctl
start

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution

/bin/sh: 1: Syntax error: EOF in backquote substitution


Note: it looks like the line 'chpst -u vcap:vcap bundle exec rake db:migrate'
is causing the issue - it tries to migrate the DB when CC runs for the first
time.

- Anuj


cf push working

Ankur Srivastava <ankursri1@...>
 

Hi,
What does cf push do in the background? What steps are performed by cf
push?

Regards,
Ankur


Re: Staging and Runtime Hooks Feature Narrative

John Shahid
 

Hi Troy,

Thanks for putting together this proposal. I added some comments/questions
to the document. I would love your feedback/response on those. Most of the
comments are concerned with the lack of concrete use cases. I think adding
a few examples to each use case will clarify the value added by the hooks.

Cheers,

JS

On Mon, Apr 11, 2016 at 1:04 PM Mike Youngstrom <youngm(a)gmail.com> wrote:

An interesting proposal. Any thoughts about this proposal in relation to
multi-buildpacks [0]? How many of the use cases for this feature go away
in lieu of multi-buildpack support? I think it would be interesting to be
able to apply hooks without checking scripts into the application (like
multi-buildpack).

This feature also appears to be somewhat related to [1]. I hope that
someone is overseeing all these newly proposed buildpack features to help
ensure they are coherent.

Mike


[0]
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/H64GGU6Z75CZDXNWC7CKUX64JNPARU6Y/
[1]
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/thread/GRKFQ2UOQL7APRN6OTGET5HTOJZ7DHRQ/#SEA2RWDCAURSVPIMBXXJMWN7JYFQICL3

On Fri, Apr 8, 2016 at 4:16 PM, Troy Topnik <troy.topnik(a)hpe.com> wrote:

This feature allows developers more control of the staging and deployment
of their application code, without them having to fork existing buildpacks
or create their own.


https://docs.google.com/document/d/1PnTtTLwXOTG7f70ilWGlbTbi1LAXZu9zYnrUVvjr31I/edit

Hooks give developers the ability to optionally:
* run scripts in the staging container before and/or after the
bin/compile scripts executed by the buildpack, and
* run scripts in each app container before the app starts (via .profile
as per the Heroku buildpack API)

A similar feature has been available and used extensively in Stackato for
a few years, and we'd like to contribute this functionality back to Cloud
Foundry.

A proof-of-concept of this feature has already been submitted as a pull
request, and the Feature Narrative addresses many of the questions raised
in the PR discussion:


https://github.com/cloudfoundry-incubator/buildpack_app_lifecycle/pull/13

Please weigh in with comments in the document itself or in this thread.

Thanks,

TT


Re: vcap.component.discover in 233

Matt Cholick
 

Got it. Thanks for the links to the commits, appreciate it.

On Tue, Apr 12, 2016 at 6:19 PM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

For the DEA's, I think it was removed here [1] with the removal of
`vcap/component`. For health manager, I believe it was removed here [2]
when the `cfcomponent` dependency was removed.

In both cases, it was part of a move away from /varz to the collector and
a change from using nats for discovery to consul.

[1]:
https://github.com/cloudfoundry/dea_ng/commit/57581eef6532d87f736145ae4005880e009cf956
[2]:
https://github.com/cloudfoundry/hm9000/commit/7c343033a598b891193ee12278d92b786a35697f

On Tue, Apr 12, 2016 at 3:44 PM, Matt Cholick <cholick(a)gmail.com> wrote:

We recently upgraded from 228 to 233. Since then, one thing I'm seeing is
that neither DEA nor HM9000 is responding to a vcap.component.discover on
nats. Is this expected? I've dug a bit and haven't found a commit yet.
Wondering if the bug is in our environment/config or our expectations.

CloudController and others are examples of components that still respond to
this request.

-Matt Cholick


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: AWS / CF v233 deployment

Bharath
 

On 12 Apr 2016 21:37, "Sylvain Gibier" <sylvain(a)munichconsulting.de>
wrote:

Hi,

Trying to do a fresh installation of CF on AWS - I'm following
http://docs.cloudfoundry.org/deploying/aws/cf-stub.html and used
http://docs.cloudfoundry.org/deploying/common/consul-security.html to
generate the consul certificates.

According to the logs, the consul_agent process failed to start.

{"timestamp":"1460475448.173879385","source":"confab","message":"confab.agent-client.verify-joined.members.request.failed","log_level":2,"data":{"error":"Get
http://127.0.0.1:8500/v1/agent/members: dial tcp 127.0.0.1:8500:
getsockopt: connection refused","wan":false}}

==> Starting Consul agent...

==> Error starting agent: Failed to start Consul client: Failed to
load cert/key pair: crypto/tls: failed to parse certificate PEM data


How can I debug this issue?

Cheers,

sylvain


Re: [cf-bosh] Re: Re: [Metron Agent] failed to generate job templates with metron agent on top of OpenStack Dynamic network

Yitao Jiang
 

Thanks Amit.

If, when using a dynamic network, OpenStack allocates the IP address and
that IP address is used to configure metron_agent before launching the VM,
would that possibly solve the issue?

On Tue, Apr 12, 2016 at 1:24 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

This will not work with dynamic networks. Many jobs in cf-release rely on
data from BOSH to determine their IP so that configuration files can be
rendered up-front by the director rather than at runtime, requiring system
calls to determine IP. metron_agent is one such job, and it tends to be
colocated with each other job (it is what allows all system component logs
to be aggregated through the loggregator system), so this would require all
Cloud Foundry VMs to be on a manual network. You don't need to manually
pick the IPs, you just need to tell BOSH which IPs in the network not to
use and specify these in the "reserved" range.
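A sketch of that workaround in BOSH v1 manifest style (the gateway, reserved range, and network name are illustrative; the cloud_properties values echo the dynamic-net snippet quoted later in this thread):

```yaml
networks:
- name: cf1
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    dns:
    - 114.114.114.114
    - 8.8.8.8
    # Tell BOSH which addresses it must NOT hand out; everything else
    # in the range is available for BOSH to assign, so it can render
    # each job's IP into templates (e.g. metron_agent's syslog config)
    # up-front at deploy time.
    reserved:
    - 10.0.0.2 - 10.0.0.100
    cloud_properties:
      net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
      security_groups:
      - Test OS SG_20160128T070152Z
```

With a manual network BOSH knows every instance's IP before the VM exists, which is exactly what the ERB templates like syslog_forwarder.conf.erb need.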

Since so many different components depend on being able to determine their
IP via BOSH data, there's no quick workaround if you want to stick to using
dynamic networks, but we're aware of this current limitation.

Best,
Amit

On Mon, Apr 11, 2016 at 7:23 PM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

Is it a bug of CF or Bosh ?

On Fri, Apr 8, 2016 at 12:08 PM, Ben R <vagcom.ben(a)gmail.com> wrote:

I have the same issue. It has to do with every release since bosh 248.
However, dynamic networks with older bosh releases + cf-231/cf-231 work.

This must be a bug.

Ben R


On Thu, Apr 7, 2016 at 8:55 PM, Yitao Jiang <jiangyt.cn(a)gmail.com>
wrote:

Hi,guys

When deploy CF on top of OpenStack with dynamic network, the jobs
failed with metron-agent
Error filling in template 'syslog_forwarder.conf.erb' (line 44:
undefined method `strip' for nil:NilClass)

here's related logs

Detecting deployment changes
----------------------------
Releases
cf
version type changed: String -> Fixnum
- 233
+ 233

Compilation
No changes

Update
± canaries:
- 1
+ 0

Resource pools
No changes

Disk pools
No changes

Networks
dynamic-net
+ name: dynamic-net
subnets
10.0.0.0/24
cloud_properties
+ net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
+ security_groups: ["Test OS SG_20160128T070152Z"]
+ dns: ["114.114.114.114", "8.8.8.8"]

+ range: 10.0.0.0/24
+ name: Test OS Sub Internal Network_20160128T070152Z

+ type: dynamic


Jobs
stats_z1

± networks:

- {"name"=>"cf1"}

+ {"name"=>"dynamic-net"}


Properties
No changes


Meta
No changes


Please review all changes carefully

Deploying
---------

Are you sure you want to deploy? (type 'yes' to continue): yes


Director task 57
Started preparing deployment > Preparing deployment. Done (00:00:03)


Error 100: Unable to render instance groups for deployment. Errors
are:
- Unable to render jobs for instance group 'stats_z1'. Errors are:

- Unable to render templates for job 'metron_agent'. Errors are:

- Error filling in template 'syslog_forwarder.conf.erb' (line
44: undefined method `strip' for nil:NilClass)

Task 57 error

For a more detailed error report, run: bosh task 57 --debug
As the IPs are managed by OpenStack, bosh cannot get the actual IP address of
each VM until the VM is alive; this leads to the generated job spec not
containing IP address info.
So, must I configure the network type to manual?

snippets of deployment yml

1001 - name: dynamic-net
1002 subnets:
1003 - cloud_properties:
1004 net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
1005 security_groups:
1006 - Test OS SG_20160128T070152Z
1007 dns:
1010 - 114.114.114.114
1011 - 8.8.8.8
1012 range: 10.0.0.0/24
1013 name: Test OS Sub Internal Network_20160128T070152Z
1014 type: dynamic

​Rendered job spec

{"deployment"=>"staging-01", "job"=

{"name"=>"stats_z1", "templates"=>[{"name"=>"collector",
"version"=>"6c210292f18d129e9a037fe7053836db2d494344",
"sha1"=>"38927f47b15c2daf6c8a2e7c760e73e5ff90
dfd4", "blobstore_id"=>"23531029-0ee1-4267-8863-b5f931afaecb"},
{"name"=>"metron_agent",
"version"=>"2b80a211127fc642fc8bb0d14d7eb30c37730db3", "sha1"=>"150f2
7445c2ef960951c1f26606525d41ec629b2",
"blobstore_id"=>"e87174dc-f3f7-4768-94cd-74f299813528"}],
"template"=>"collector", "version"=>"6c210292f18d129e9a037fe70
53836db2d494344", "sha1"=>"38927f47b15c2daf6c8a2e7c760e73e5ff90dfd4",
"blobstore_id"=>"23531029-0ee1-4267-8863-b5f931afaecb"}, "index"=>0,
"bootstrap"=>true,
"name"=>"stats_z1", "id"=>"99f349d0-fb5d-4de7-9912-3de5559d2f19",
"az"=>nil,

*"networks"=>{"dynamic-net"=>{"type"=>"dynamic",
"cloud_properties"=>{"net_id"=>"0700ae03-4b38-464e-b40d-0a9c8dd18ff0",
"security_groups"=>["Test OS SG_20160128T070152Z"]},
"dns"=>["114.114.114.114", "8.8.8.8", "10.0.0.13"], "default"=>["dns",
"gateway"],
"dns_record_name"=>"0.stats-z1.dynamic-net.staging-01.microbosh"}}*,
"properties"=>{"collector"=>{"aws"=>{
"access_key_id"=>nil, "secret_access_key"=>nil},
"datadog"=>{"api_key"=>nil, "application_key"=>nil},
"deployment_name"=>nil, "logging_level"=>"info", "interv
als"=>{"discover"=>60, "healthz"=>30, "local_metrics"=>30,
"nats_ping"=>30, "prune"=>300, "varz"=>30}, "use_aws_cloudwatch"=>false,
"use_datadog"=>false, "use
_tsdb"=>false, "opentsdb"=>{"address"=>nil, "port"=>nil},
"use_graphite"=>false, "graphite"=>{"address"=>nil, "port"=>nil},
"memory_threshold"=>800}, "nats"=>
{"machines"=>["10.0.0.127"], "password"=>"NATS_PASSWORD",
"port"=>4222, "user"=>"NATS_USER"},
"syslog_daemon_config"=>{"address"=>nil, "port"=>nil, "transport
"=>"tcp", "fallback_addresses"=>[], "custom_rule"=>"",
"max_message_size"=>"4k"},
"metron_agent"=>{"dropsonde_incoming_port"=>3457, "preferred_protocol"=>"udp
", "tls"=>{"client_cert"=>"", "client_key"=>""}, "debug"=>false,
"zone"=>"z1", "deployment"=>"ya-staging-01",
"tcp"=>{"batching_buffer_bytes"=>10240, "batchin
g_buffer_flush_interval_milliseconds"=>100},
"logrotate"=>{"freq_min"=>5, "rotate"=>7, "size"=>"50M"},
"buffer_size"=>10000, "enable_buffer"=>false}, "metron_
endpoint"=>{"shared_secret"=>"LOGGREGATOR_ENDPOINT_SHARED_SECRET"},
"loggregator"=>{"tls"=>{"ca_cert"=>""}, "dropsonde_incoming_port"=>3457,
"etcd"=>{"machine
s"=>["10.0.0.133"], "maxconcurrentrequests"=>10}}},
"dns_domain_name"=>"microbosh", "links"=>{},
"address"=>"99f349d0-fb5d-4de7-9912-3de5559d2f19.stats-z1.dyn
amic-net.ya-staging-01.microbosh", "persistent_disk"=>0,
"resource_pool"=>"small_z1"}​

--

Regards,

Yitao



Re: vcap.component.discover in 233

Matthew Sykes <matthew.sykes@...>
 

For the DEA's, I think it was removed here [1] with the removal of
`vcap/component`. For health manager, I believe it was removed here [2]
when the `cfcomponent` dependency was removed.

In both cases, it was part of a move away from /varz to the collector and a
change from using nats for discovery to consul.

[1]:
https://github.com/cloudfoundry/dea_ng/commit/57581eef6532d87f736145ae4005880e009cf956
[2]:
https://github.com/cloudfoundry/hm9000/commit/7c343033a598b891193ee12278d92b786a35697f

On Tue, Apr 12, 2016 at 3:44 PM, Matt Cholick <cholick(a)gmail.com> wrote:

We recently upgraded from 228 to 233. Since then, one thing I'm seeing is
that neither DEA nor HM9000 is responding to a vcap.component.discover on
nats. Is this expected? I've dug a bit and haven't found a commit yet.
Wondering if the bug is in our environment/config or our expectations.

CloudController and others are examples of components that still respond to
this request.

-Matt Cholick

--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: CF212 - golang app crash - how to perform the troubleshooting?

Gwenn Etourneau
 

These are env. variables; in Go you can easily get them with
os.Getenv("VCAP_SERVICES"), or you can use go-cfenv [1] as a helper.


[1] https://github.com/cloudfoundry-community/go-cfenv/
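The app in this thread is Go, but the lookup itself is the same in any language: VCAP_SERVICES is just JSON in an environment variable. A Python sketch for illustration (the `mongodb26` service label comes from the original question; the credential field names are assumed):

```python
import json
import os

def service_credentials(service_label: str) -> dict:
    """Parse VCAP_SERVICES from the environment and return the
    credentials block of the first bound instance of service_label."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    instances = services.get(service_label, [])
    return instances[0].get("credentials", {}) if instances else {}

# Simulate what the container sees for the mongodb26 binding
# mentioned in the original question (structure illustrative).
os.environ["VCAP_SERVICES"] = json.dumps({
    "mongodb26": [{"credentials": {"dbname": "db0",
                                   "uri": "mongodb://example"}}]
})
assert service_credentials("mongodb26")["dbname"] == "db0"
assert service_credentials("unbound-service") == {}
```

When debugging inside a warden container via wsh, exporting the same JSON by hand (as Rafal attempted) does work; the usual failure mode is shell quoting mangling the JSON, which a parse like the one above surfaces immediately.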

On Wed, Apr 13, 2016 at 1:38 AM, JT Archie <jarchie(a)pivotal.io> wrote:

They are passed via environment variables and then your start command is
used to start the app.

This is an example of the startup script
<https://github.com/cloudfoundry/dea_ng/blob/master/lib/dea/starting/startup_script_generator.rb>
generated.

Does this help?


On Tue, Apr 12, 2016 at 7:52 AM, Rafal Radecki <radecki.rafal(a)gmail.com>
wrote:

Hi :)

I am starting an app and it crashes. In the meantime, while it is trying to
start, I log in to the appropriate warden container, and after the app crashes
I try to start the go binary manually. The problem is that I am not able to
see environment variables like VCAP_SERVICES, and I cannot pass their
content correctly to my app.

Can you tell me how to start my app inside the warden container once I
login through wsh with proper environment setup? I tried below:

root(a)19g7korv77m:/app# export
VCAP_SERVICES='{"mongodb26":[{"credentials":{"dbname":"...}
root(a)19g7korv77m:/app# bin/my_go_binary

but it is obviously not the proper way. How does cloudfoundry pass
variables to the process started in the warden container?

BR,
Rafal.


Re: Request for Multibuildpack Use Cases

Mike Youngstrom <youngm@...>
 

Thanks for the clarification, Danny. I guess the point I was trying to make
earlier with the "oracle_library" buildpack use case is that this feature
has some potential functional commonality with the proposed binary service
broker feature and the proposed pre/post hook features. So my hope is
that all these features can be considered together, to help ensure a coherent
solution across this entire buildpack extensibility space that is being
explored right now.

Thanks,
Mike

On Tue, Apr 12, 2016 at 4:16 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

As a starting point we have decided to work with officially provided
buildpacks as their behavior is known and controlled by the buildpacks
team. By discovering use cases (thank you John, Jack and David for your
examples) we can start work towards implementing multibuildpack solutions
that would be open to the community to consume and iterate on.
On Apr 12, 2016 1:06 PM, "Mike Youngstrom" <youngm(a)gmail.com> wrote:

John,

It sounds like the buildpack team is thinking the multi buildpack feature
would only work for buildpacks they provide not a custom
"dependency-resolution" buildpack. Or at least that is how I understood
the message from Danny Rosen earlier in the thread.

Mike

On Tue, Apr 12, 2016 at 10:45 AM, John Feminella <jxf(a)pivotal.io> wrote:

Multibuildpack is absolutely useful and I'm excited for this proposal.

I encounter a lot of use cases for this. The most common is that an
application wants to pull in private dependencies during a future
dependency-resolution step of a later buildpack, but the dependency
resolver needs to be primed in some specific way. If you wait until
buildpack time it's too late.

On Heroku, for example, this is accomplished by having something like
the netrc buildpack (
https://github.com/timshadel/heroku-buildpack-github-netrc), adding a
GITHUB_TOKEN environment variable, and then running your "real" buildpack.
The netrc BP runs first, allowing Bundler to see the private dependencies.

best,
~ jf

On Tue, Apr 12, 2016 at 12:36 PM Jack Cai <greensight(a)gmail.com> wrote:

It would be more useful if the multi-buildpack can reference an admin
buildpack in addition to a remote git-hosted buildpack. :-)

Jack


On Tue, Apr 12, 2016 at 6:38 AM, David Illsley <davidillsley(a)gmail.com>
wrote:

In the past we've used the multi-buildpack to be able to use Ruby Sass
to compile SCSS for non-Ruby projects (Node and Java). In that case we used
the multi-buildpack and a .buildpacks file, which worked reasonably well
(and was very clear).
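For reference, the .buildpacks file used by the community multi-buildpack is just an ordered list of buildpack URLs, one per line; a sketch for the Ruby-Sass-before-Node case above (URLs illustrative):

```
https://github.com/cloudfoundry/ruby-buildpack
https://github.com/cloudfoundry/nodejs-buildpack
```

Each listed buildpack runs in order, with the final one determining how the app is launched.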

On Mon, Apr 11, 2016 at 1:15 AM, Danny Rosen <drosen(a)pivotal.io>
wrote:

Hi there,

The CF Buildpacks team is considering taking on a line of work to
provide more formal support for multibuildpacks. Before we start, we would
be interested in learning if any community users have compelling use cases
they could share with us.

For more information on multibuildpacks, see Heroku's documentation
[1]

[1] -
https://devcenter.heroku.com/articles/using-multiple-buildpacks-for-an-app
