Re: Retrieving First and Last name

Filip Hanik
 

Get an id_token.

When you call the /oauth/token endpoint, you usually have a parameter like
"response_type=token".

If you change that to "response_type=token id_token", you will get two tokens
back. One is the access token; the other is the user information token, also
called the id_token.
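In an id_token, the first and last name typically come back as the standard OIDC claims `given_name` and `family_name` (assuming the client has the relevant scopes and the user's profile attributes are populated). A JWT's payload is just base64url-encoded JSON, so you can inspect it without verifying the signature. A quick sketch — the sample token below is fabricated purely for illustration, not a real UAA token:

```python
import base64
import json

def jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # base64url decoding needs the stripped padding restored first
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake id_token (header.payload.signature) just to demo the decode;
# a real one comes back from /oauth/token with response_type=token id_token.
claims = {"user_name": "marissa", "given_name": "Marissa", "family_name": "Bloggs"}
fake = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "sig",
])

decoded = jwt_payload(fake)
print(decoded["given_name"], decoded["family_name"])  # prints: Marissa Bloggs
```

Note this only decodes; in production you should verify the token signature against the UAA token key before trusting any claim.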

On Wed, Oct 5, 2016 at 6:43 PM, Bryan Perino <Bryan.Perino(a)gmail.com> wrote:

I was wondering if there was any way to get a user's first name and last
name from an access token? It returns the following currently as a map:

map = {LinkedHashMap@11550} size = 18
0 = {LinkedHashMap$Entry@11556} "jti" -> "671c95ef3fd043b7a6ebef9a14521ac5"
1 = {LinkedHashMap$Entry@11557} "sub" -> "dd2b52b4-9a83-49b3-952d-90781c3070e5"
2 = {LinkedHashMap$Entry@11558} "scope" -> " size = 4"
3 = {LinkedHashMap$Entry@11559} "client_id" -> "user-dashboard"
4 = {LinkedHashMap$Entry@11560} "cid" -> "user-dashboard"
5 = {LinkedHashMap$Entry@11561} "azp" -> "user-dashboard"
6 = {LinkedHashMap$Entry@11562} "grant_type" -> "authorization_code"
7 = {LinkedHashMap$Entry@11563} "user_id" -> "dd2b52b4-9a83-49b3-952d-90781c3070e5"
8 = {LinkedHashMap$Entry@11564} "origin" -> "uaa"
9 = {LinkedHashMap$Entry@11565} "user_name" -> "marissa"
10 = {LinkedHashMap$Entry@11566} "email" -> "marissa@test.org"
11 = {LinkedHashMap$Entry@11567} "auth_time" -> "1475714185"
12 = {LinkedHashMap$Entry@11568} "rev_sig" -> "2eb735bc"
13 = {LinkedHashMap$Entry@11569} "iat" -> "1475714185"
14 = {LinkedHashMap$Entry@11570} "exp" -> "1475757385"
15 = {LinkedHashMap$Entry@11571} "iss" -> "http://localhost:8080/uaa/oauth/token"
16 = {LinkedHashMap$Entry@11572} "zid" -> "uaa"
17 = {LinkedHashMap$Entry@11573} "aud" -> " size = 4"


Thanks for any help.


Retrieving First and Last name

Bryan Perino
 

I was wondering if there was any way to get a user's first name and last name from an access token? It returns the following currently as a map:

map = {LinkedHashMap@11550} size = 18
0 = {LinkedHashMap$Entry@11556} "jti" -> "671c95ef3fd043b7a6ebef9a14521ac5"
1 = {LinkedHashMap$Entry@11557} "sub" -> "dd2b52b4-9a83-49b3-952d-90781c3070e5"
2 = {LinkedHashMap$Entry@11558} "scope" -> " size = 4"
3 = {LinkedHashMap$Entry@11559} "client_id" -> "user-dashboard"
4 = {LinkedHashMap$Entry@11560} "cid" -> "user-dashboard"
5 = {LinkedHashMap$Entry@11561} "azp" -> "user-dashboard"
6 = {LinkedHashMap$Entry@11562} "grant_type" -> "authorization_code"
7 = {LinkedHashMap$Entry@11563} "user_id" -> "dd2b52b4-9a83-49b3-952d-90781c3070e5"
8 = {LinkedHashMap$Entry@11564} "origin" -> "uaa"
9 = {LinkedHashMap$Entry@11565} "user_name" -> "marissa"
10 = {LinkedHashMap$Entry@11566} "email" -> "marissa@test.org"
11 = {LinkedHashMap$Entry@11567} "auth_time" -> "1475714185"
12 = {LinkedHashMap$Entry@11568} "rev_sig" -> "2eb735bc"
13 = {LinkedHashMap$Entry@11569} "iat" -> "1475714185"
14 = {LinkedHashMap$Entry@11570} "exp" -> "1475757385"
15 = {LinkedHashMap$Entry@11571} "iss" -> "http://localhost:8080/uaa/oauth/token"
16 = {LinkedHashMap$Entry@11572} "zid" -> "uaa"
17 = {LinkedHashMap$Entry@11573} "aud" -> " size = 4"


Thanks for any help.


Notice of change to Loggregator in CF 244

Allen Duet <aduet@...>
 

As part of our ongoing efforts to improve the reliability of Loggregator, we
are in the process of removing a deprecated feature of Loggregator.

The Loggregator_consumer library will be completely removed from
Loggregator from CF 245 forward. This library has been deprecated for some
time now in favor of using noaa <https://github.com/cloudfoundry/noaa> to
access logs and metrics from Loggregator's firehose.

The Loggregator_consumer library and its discovery points will be removed from
the following:
1) The CF template has been edited to remove the route registrar entry for the endpoint
2) The Cloud Controller API will no longer advertise the endpoint in the v2/info result
3) The Loggregator_consumer library will no longer be distributed with Loggregator.

The Loggregator_consumer source has been moved to cloudfoundry-attic and can
be found here:
https://github.com/cloudfoundry-attic/loggregator_consumer

*Removal of the route registrar for the endpoint was included in CF 244.
This effectively makes the endpoint inaccessible in CF 244.*

The CF 244 release notes have been updated with this information:
https://github.com/cloudfoundry/cf-release/releases/tag/v244#loggregator

We apologize for the late notification of this change. If this change in
CF 244 has caused an issue, please contact us on the Loggregator Slack
channel.

Please respond with any concerns or issues with this change moving forward.

Regards,
Allen Duet
PM, Loggregator


Re: Forward container metrics to syslog drain

Daniel Mikusa
 

The app_logs demo is just logs. Look at either the firehose demo or the
container_metrics demo.

https://github.com/cloudfoundry/noaa#logs-and-metrics-firehose

or

https://github.com/cloudfoundry/noaa#container-metrics

Dan

On Wed, Oct 5, 2016 at 11:40 AM, Mehran Saliminia <msaliminia(a)gmail.com>
wrote:

I have executed this sample app for 5 minutes:
https://github.com/cloudfoundry/noaa/blob/master/samples/app_logs/main.go/#L32-L39

As the sample app's comments suggest, I expected to get ContainerMetrics as
well, but I only receive eventType:LogMessage.


Re: Forward container metrics to syslog drain

Mehran Saliminia
 

I have executed this sample app for 5 minutes:
https://github.com/cloudfoundry/noaa/blob/master/samples/app_logs/main.go/#L32-L39

As the sample app's comments suggest, I expected to get ContainerMetrics as well, but I only receive eventType:LogMessage.


Re: v235->v239 bosh-lite release deployment has issues creating and using fog blobstore, disabling cf - how can I get the director to create blobstore directories?

James Tuddenham
 

Update - our resolution for this was to switch from using fog for the blobstore in the bosh-lite test environment to using webdav.


Re: Forward container metrics to syslog drain

Johannes Hiemer <jvhiemer@...>
 

That's very neat. Thanks for that hint, Johannes!

On 5 Oct 2016, at 16:18, Johannes Tuchscherer <jtuchscherer(a)pivotal.io> wrote:

That is not quite true. From the Firehose you can get ContainerMetrics for an app as long as you have access to that app. The Firehose CLI plugin, for example, shows all application-related messages - log messages, HttpStartStop events and container metrics. And that plugin works even for a 'normal' space developer.

Here is the method from the noaa library that you can use to get access to that data: https://github.com/cloudfoundry/noaa/blob/master/consumer/async.go#L53-L55

Johannes

On Wed, Oct 5, 2016 at 4:09 PM Mehran Saliminia <msaliminia(a)gmail.com> wrote:
Thank you for your response! It works for app logs with a regular user token, but streaming ContainerMetrics will only succeed if you have admin credentials.


Re: Forward container metrics to syslog drain

Johannes Tuchscherer
 

That is not quite true. From the Firehose you can get ContainerMetrics for
an app as long as you have access to that app. The Firehose CLI plugin, for
example, shows all application-related messages - log messages,
HttpStartStop events and container metrics. And that plugin works even for
a 'normal' space developer.

Here is the method from the noaa library that you can use to get access to
that data:
https://github.com/cloudfoundry/noaa/blob/master/consumer/async.go#L53-L55

Johannes

On Wed, Oct 5, 2016 at 4:09 PM Mehran Saliminia <msaliminia(a)gmail.com>
wrote:

Thank you for your response! It works for app logs with a regular user
token, but streaming ContainerMetrics will only succeed if you have admin
credentials.


Re: Forward container metrics to syslog drain

Mehran Saliminia
 

Thank you for your response! It works for app logs with a regular user token, but streaming ContainerMetrics will only succeed if you have admin credentials.


Re: Forward container metrics to syslog drain

Daniel Mikusa
 

I think this might also be helpful.

https://github.com/cloudfoundry/noaa#sample-applications

Sample app that shows how to connect to loggregator and stream container
metrics. It can run as an admin or regular user. I think it's probably
the simplest example of how to get container metrics from loggregator.

Dan


On Wed, Oct 5, 2016 at 9:21 AM, Geoff Franks <geoff(a)starkandwayne.com>
wrote:

Ah, in that case, check out
https://github.com/cloudfoundry-community/kibana-me-logs and
https://github.com/cloudfoundry-community/docker-boshrelease in conjunction
with https://github.com/cloudfoundry-community/logstash-docker-boshrelease.

kibana-me-logs is a cf plugin that takes an app, uses the
logstash-docker-boshrelease to create a logging service for the app, and
then pushes a kibana app bound to the same service, so that users can see
their app's log data in kibana.

If you don't want to use kibana, you can probably find the relevant info
for the service broker in the docker-boshrelease, for being able to
provision a service that provides a syslog drain URL for an application.

On Oct 5, 2016, at 1:41 AM, Mehran Saliminia <msaliminia(a)gmail.com>
wrote:

Yes, but it needs admin credentials to fetch the container metrics. We
want to deliver metrics to the syslog drain endpoint which the user binds
to his application as a third-party log management system.


Re: Forward container metrics to syslog drain

Geoff Franks <geoff@...>
 

Ah, in that case, check out https://github.com/cloudfoundry-community/kibana-me-logs and https://github.com/cloudfoundry-community/docker-boshrelease in conjunction with https://github.com/cloudfoundry-community/logstash-docker-boshrelease.

kibana-me-logs is a cf plugin that takes an app, uses the logstash-docker-boshrelease to create a logging service for the app, and then pushes a kibana app bound to the same service, so that users can see their app's log data in kibana.

If you don't want to use kibana, you can probably find the relevant info for the service broker in the docker-boshrelease, for being able to provision a service that provides a syslog drain URL for an application.
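To sanity-check what a bound drain would actually receive, a tiny TCP listener is enough — anything that accepts lines on a socket can stand in for a syslog drain endpoint during testing. This toy receiver is my own sketch, not part of any of the releases above:

```python
import socket
import threading

def run_drain(host="127.0.0.1", port=0):
    """Tiny stand-in for a syslog drain: accept one connection, collect lines."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port=0 lets the OS pick a free port
    srv.listen(1)
    received = []

    def serve():
        conn, _ = srv.accept()
        with conn:
            for line in conn.makefile("r"):
                received.append(line.rstrip("\n"))

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return srv.getsockname()[1], received, t

port, received, t = run_drain()

# Simulate the platform forwarding one RFC 5424-style line to the drain
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"<14>1 2016-10-05T12:00:00Z host app - - - CPU=3.2%\n")
client.close()
t.join(timeout=2)
print(received)
```

You would point a user-provided service's drain URL (e.g. syslog://host:port) at wherever such a receiver listens; the message format shown is only an approximation of what loggregator emits.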

On Oct 5, 2016, at 1:41 AM, Mehran Saliminia <msaliminia(a)gmail.com> wrote:

Yes, but it needs admin credentials to fetch the container metrics. We want to deliver metrics to the syslog drain endpoint which the user binds to his application as a third-party log management system.


Re: Forward container metrics to syslog drain

Mehran Saliminia
 

Yes, but it needs admin credentials to fetch the container metrics. We want to deliver metrics to the syslog drain endpoint which the user binds to his application as a third-party log management system.


Re: Manifests for PostgreSQL, MongoDB

Dr Nic Williams <drnicwilliams@...>
 

For Mongo/PostgreSQL there are a couple of options:
* docker-boshrelease, configured with the PG/Mongo/whatever services you want; these are dev services - no backups, let alone continuous archiving; no HA
* https://github.com/dingotiles/dingo-postgresql-release provides clusters of HA PG, with continuous archiving to a remote object store

I'm unsure about MongoDB service brokers with continuous archiving & HA. Perhaps enquire with Anynines, who may have a commercial solution.

For the two bullet points above there are no Azure-specific templates, but if you know enough about Azure then you can insert the specific Azure bits in lieu of the Warden template.

Or ping me direct to figure out where you're at and what help you need.

Cheers,
Dr Nic

On Wed, Oct 5, 2016 at 3:43 AM +1000, "vijay kumar Mattewada" <matt.vijay(a)gmail.com> wrote:

Hi Team,

How are you?
I need help related to Cloud Foundry.
I need to deploy PostgreSQL and MongoDB using manifests on Azure BOSH Cloud Foundry.
Please share the manifests for the PostgreSQL and MongoDB service deployments.
Please advise.


Regards,
Vijay


Re: Failure of all metron_agents in Cloud Foundry

Sylvain Goulmy <sygoulmy@...>
 

Hi Dan,

Thank you for your feedback.

Everything was working fine except the metron_agent job on each VM.

Actually, we solved that issue by restarting etcd. We have three etcd
servers; restarting the first one (monit restart all) didn't change anything
(all the jobs were restarted except the metron_agent), but after restarting
the second one everything went fine again and all the metron_agents were up
and running on each VM.

I have also checked the etcd table content (before restarting) and
everything looked fine.

Unfortunately, we haven't understood the root cause of this behaviour, which
has happened before.

Sylvain

On Tue, Oct 4, 2016 at 5:30 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

If the ETCD servers are working, you might try restarting your dopplers.
The dopplers register in ETCD and Metron looks at ETCD to locate the
dopplers. If ETCD is working and you're getting this message, it could be
that there are no registered dopplers. Restarting the dopplers will cause
them to register again.

Hope that helps!

Dan

On Tue, Oct 4, 2016 at 10:12 AM, Eric Boucherie <erboucherie(a)airfrance.fr>
wrote:

Hi all, I had some strange behaviour on our Cloud Foundry platform
(v237 running on OpenStack).
All metron_agents (on all VMs) were failing to start with the same
error in the log file:

goroutine 31 [running]:
panic(0x9dca60, 0xc8201a6060)
/var/vcap/data/packages/golang1.6/85a489b7c0c2584aa9e0a6dd83666db31c6fc8e8.1-1857aa617c2de427d5ece149206ba48dda411635/src/runtime/panic.go:464 +0x3e6
metron/clientreader.Read(0xc820107ce0, 0xc82013e280, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/clientreader/client_reader.go:17 +0x22b
main.initializeDopplerPool.func1(0xc820107ce0, 0xc82013e280, 0x1, 0x1, 0xc82006a870, 0xc8200b99c0, 0xc820107dd0, 0xc820107d10)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:180 +0xe9
created by main.initializeDopplerPool
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:184 +0xc9e
panic: No dopplers listening on [udp]

I looked at the Doppler process log, and everything looked fine.
I presumed that the error could come from the etcd servers, but I couldn't
see any relevant errors in their logs.
Does anyone have an idea of the reason for this error?

Thanks in advance.
Eric


Re: Failure of all metron_agents in Cloud Foundry

Daniel Mikusa
 

If the ETCD servers are working, you might try restarting your dopplers.
The dopplers register in ETCD and Metron looks at ETCD to locate the
dopplers. If ETCD is working and you're getting this message, it could be
that there are no registered dopplers. Restarting the dopplers will cause
them to register again.
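The panic makes sense in that light: Metron builds its doppler pool from whatever is registered in etcd, and an empty result set is fatal. A rough sketch of that selection logic — the etcd key layout and addresses here are invented for illustration, not the actual schema:

```python
# Hypothetical snapshot of doppler registrations as key -> advertised address.
# The real etcd layout differs; this only illustrates why an empty set panics.
etcd_keys = {
    "/doppler/z1/0": "udp://10.0.1.10:3457",
    "/doppler/z1/1": "udp://10.0.1.11:3457",
}

def doppler_pool(keys, protocol="udp"):
    """Select registered dopplers speaking the given protocol."""
    pool = [addr for addr in keys.values() if addr.startswith(protocol + "://")]
    if not pool:
        # Mirrors metron's behaviour: no registered dopplers is unrecoverable
        raise RuntimeError("No dopplers listening on [%s]" % protocol)
    return pool

print(doppler_pool(etcd_keys))  # two udp dopplers found
try:
    doppler_pool({}, "udp")     # empty registry -> same message as the panic
except RuntimeError as e:
    print(e)
```

So restarting the dopplers (repopulating the registrations) or, as Sylvain found, restarting etcd itself, both fix the same underlying condition: an empty pool at Metron startup.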

Hope that helps!

Dan

On Tue, Oct 4, 2016 at 10:12 AM, Eric Boucherie <erboucherie(a)airfrance.fr>
wrote:

Hi all, I had some strange behaviour on our Cloud Foundry platform
(v237 running on OpenStack).
All metron_agents (on all VMs) were failing to start with the same
error in the log file:

goroutine 31 [running]:
panic(0x9dca60, 0xc8201a6060)
/var/vcap/data/packages/golang1.6/85a489b7c0c2584aa9e0a6dd83666db31c6fc8e8.1-1857aa617c2de427d5ece149206ba48dda411635/src/runtime/panic.go:464 +0x3e6
metron/clientreader.Read(0xc820107ce0, 0xc82013e280, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/clientreader/client_reader.go:17 +0x22b
main.initializeDopplerPool.func1(0xc820107ce0, 0xc82013e280, 0x1, 0x1, 0xc82006a870, 0xc8200b99c0, 0xc820107dd0, 0xc820107d10)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:180 +0xe9
created by main.initializeDopplerPool
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:184 +0xc9e
panic: No dopplers listening on [udp]

I looked at the Doppler process log, and everything looked fine.
I presumed that the error could come from the etcd servers, but I couldn't
see any relevant errors in their logs.
Does anyone have an idea of the reason for this error?

Thanks in advance.
Eric


Failure of all metron_agents in Cloud Foundry

Eric Boucherie
 

Hi all, I had some strange behaviour on our Cloud Foundry platform (v237 running on OpenStack).
All metron_agents (on all VMs) were failing to start with the same error in the log file:

goroutine 31 [running]:
panic(0x9dca60, 0xc8201a6060)
/var/vcap/data/packages/golang1.6/85a489b7c0c2584aa9e0a6dd83666db31c6fc8e8.1-1857aa617c2de427d5ece149206ba48dda411635/src/runtime/panic.go:464 +0x3e6
metron/clientreader.Read(0xc820107ce0, 0xc82013e280, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/clientreader/client_reader.go:17 +0x22b
main.initializeDopplerPool.func1(0xc820107ce0, 0xc82013e280, 0x1, 0x1, 0xc82006a870, 0xc8200b99c0, 0xc820107dd0, 0xc820107d10)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:180 +0xe9
created by main.initializeDopplerPool
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:184 +0xc9e
panic: No dopplers listening on [udp]

I looked at the Doppler process log, and everything looked fine.
I presumed that the error could come from the etcd servers, but I couldn't see any relevant errors in their logs.
Does anyone have an idea of the reason for this error?

Thanks in advance.
Eric


Re: Forward container metrics to syslog drain

Geoff Franks <geoff@...>
 

Have you taken a look at firehose-to-syslog (https://github.com/cloudfoundry-community/firehose-to-syslog)?

On Oct 4, 2016, at 8:59 AM, Carlo Alberto Ferraris <carlo.ferraris(a)rakuten.com> wrote:

Will just drop this here in case somebody wants to add something: https://github.com/cloudfoundry/loggregator/issues/150


Re: Forward container metrics to syslog drain

Carlo Alberto Ferraris
 

Will just drop this here in case somebody wants to add something: https://github.com/cloudfoundry/loggregator/issues/150


Forward container metrics to syslog drain

Mehran Saliminia
 

Hi,

Does anybody know the best way to forward container metrics to a user-specified syslog drain?


Re: DNS and the blobstore

Benjamin Gandon
 

The .cf.internal DNS domain is served by Consul directly, whereas DNS requests for all other domains are forwarded to "recursors", which are the external DNS servers used by your BOSH deployment.
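The routing rule is simple enough to sketch: Consul answers authoritatively for its own domain and hands everything else to the configured recursors. The catalog contents and recursor address below are placeholders, not values from any real deployment:

```python
CONSUL_DOMAIN = ".cf.internal"
RECURSORS = ["8.8.8.8"]  # placeholder: whatever your deployment configures

# Toy view of Consul's service catalog (address is made up)
catalog = {"blobstore.service.cf.internal": "10.0.16.105"}

def resolve(name):
    """Answer from the Consul catalog for *.cf.internal, else forward."""
    if name.endswith(CONSUL_DOMAIN):
        return ("consul", catalog.get(name))
    return ("recursor", RECURSORS[0])

print(resolve("blobstore.service.cf.internal"))  # answered by Consul
print(resolve("example.com"))                    # forwarded to a recursor
```

In other words, there is no static DNS record for blobstore.service.cf.internal anywhere; the answer comes from whatever blobstore node is currently registered in Consul's service catalog.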

The config for Consul is mostly handled by a Go shim named "confab", rather than by BOSH.

/Benjamin

Le 29 août 2016 à 15:49, Neil Watson <neil(a)watson-wilson.ca> a écrit :

From http://docs.cloudfoundry.org/deploying/common/vsphere-vcloud-cf-stub.html: "Note: vSphere defaults to using an internal WebDAV blobstore for the Cloud Controller." Then the spiff merge creates an internal URL for the blobstore: "blobstore.service.cf.internal". Then I see the API server doing a DNS lookup for that URL. Where is that DNS record defined?