Proposal for Service Discovery within Elastic Clusters
Hi all,

If you are familiar with the proposed Elastic Clusters feature narrative (or even if you're not), I welcome you to take a look at the below proposal for Consul-based service discovery for components within an Elastic Clusters configuration:

https://docs.google.com/document/d/1aMpIXsPpB6O_oJsoazOl7kGajc3sRnlrflQFCtysfx8/edit

Feedback is eagerly solicited. Links to background information on Elastic Clusters can be found within the above doc.

Thanks,
Amit, CF Infrastructure team PM
Re: cf v233 api_z1/api_z2 failing
I tried v231. Unfortunately, same issue.
Re: 404 not found: Requested route ('app.domain') does not exist. -- Worked fine this morning!
Tom Sherrod <tom.sherrod@...>
Tracked down the issue: the route_emitter instance had disappeared. Reviewing https://docs.cloudfoundry.org/concepts/diego/diego-components.html, bosh vms of the diego deployment did not list a route_emitter. The downloaded manifest matched the deployment manifest, which included a route_emitter. bosh deploy started creating the missing vm but failed because route_emitter's IP address was taken. Deleted the instance; bosh deploy then completed successfully. Routes restored! Time to get logging in place for this environment. Any thoughts on how/why/debugging appreciated.
On Wed, Apr 6, 2016 at 4:10 PM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:
CF 230
A developer gets the error above for an existing application. I check another application: same error. I deploy a new application successfully: same error. bosh vms looks fine. bosh cck looks fine. cf routes: yes, the routes are there. No errors.
I created a new shared domain successfully. Deployed another app specifying that domain: successful. Pulling up the url gives the error in the subject.
What happened to the existing routes? A check of the router logs shows 404s. Where should I look next to find out where the routes have gone?
(This is a different environment than my other issue.)
Confused, Tom
April CAB call next week on Wednesday April 13th, 2016
Hi, all,

Quick reminder of the CAB call next Wednesday, April 13th @ 8a PDT. All info in this link: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit#heading=h.o44xhgvum2we

Remember, there are no more status updates but rather discussions, so come ready with your questions. Join slack.cloudfoundry.org and the #CAB channel for previous and future discussions. Talk to you all next week. We'll send one more reminder on this list.

Best, Chip, James, and Max

dr.max, ibm cloud labs, silicon valley, ca
Sent from my iPhone
Re: Cloud Controller System Domain vs App Domains
Tom Sherrod <tom.sherrod@...>
Would solution 2 allow/work with a system domain? Solution #2 means api.domain is now taken, so no other app could be deployed to api.domain. Application developers may not be happy with that limitation, which would require another domain. Doesn't that defeat the purpose of solution #2?

Tom

On Tue, Apr 5, 2016 at 2:15 PM, Nicholas Calugar <ncalugar(a)pivotal.io> wrote:
Hi John,
The point of seeding is so that there aren't two decisions (code paths) for requests to create a route. Seeding the database requires no changes to route creation logic.
Would anyone else like to comment on either of the two proposals before we make a decision and start implementing a fix?
Thanks,
Nick
On Thu, Mar 31, 2016 at 10:52 AM john mcteague <john.mcteague(a)gmail.com> wrote:
For option 2, would it not be simpler to have a single property, such as cc.blacklisted_system_domain_routes, that contains the desired list and have the CC deny route requests for those routes?
I don't see what storing them in the DB or creating real routes actually buys us here. Everything would be available via config.
I'm in favour of some form of option 2 for those of us who have existing deployments, are exposed to this issue, but would rather not change either the app or system domains.
John.
On 31 Mar 2016 5:47 a.m., "Nicholas Calugar" <ncalugar(a)pivotal.io> wrote:
Hi Cloud Foundry,
We've had a recurring issue brought to our attention regarding a Cloud Foundry deployment using a system_domain that is in the list of app_domains. When the system domain is in the list of app domains, a Shared Domain is created for the system domain. This is problematic because it allows users to create routes on the system domain, see this [1] recent issue as an example.
[1] https://github.com/cloudfoundry/cloud_controller_ng/issues/568
I'd like to propose two solutions and get some feedback regarding which the community would prefer. Please respond with your preferred solution and a brief reason why.
*Solution 1 - Require a Unique System Domain*
Instead of recommending a unique system domain, we would enforce this in the Cloud Controller. The proposed change is as follows:
1. REQUIRE system_domain to NOT be in the list of app_domains
2. REQUIRE a system_domain_organization
This will create a Private Domain for the system domain. Failure to configure correctly would not allow the Cloud Controller to start.
If we decide to implement this, an operator should ensure their deployment uses a unique system domain and a system_domain_organization and correct DNS entries before upgrading.
Example for BOSH-lite:
- app_domains: [bosh-lite.com]
- system_domain: system.bosh-lite.com
- system_domain_organization: system
- api endpoint: api.system.bosh-lite.com
- sample app endpoint: dora.bosh-lite.com
Example for a PaaS:
- app_domains: [yuge-paas-apps.io]
- system_domain: yuge-paas.com
- system_domain_organization: yuge-system-org
- api endpoint: api.yuge-paas.com
- sample app endpoint: dora.yuge-paas-apps.io
*Pro:* Cloud Controller now enforces what was previously the recommended configuration of separate system and app domains.
*Con:* A second SSL cert for the system domain and possibly a second DNS record if the system domain is not covered by the current wildcard record.
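For concreteness, here is a minimal sketch of the startup check Solution 1 describes, written in Go for illustration (the Cloud Controller itself is Ruby, and the Config field names below are assumptions, not CC internals):

package main

import "fmt"

// Illustrative config shape; field names are assumptions.
type Config struct {
	AppDomains               []string
	SystemDomain             string
	SystemDomainOrganization string
}

func validate(c Config) error {
	// 1. REQUIRE system_domain to NOT be in the list of app_domains
	for _, d := range c.AppDomains {
		if d == c.SystemDomain {
			return fmt.Errorf("system_domain %q must not be in app_domains", d)
		}
	}
	// 2. REQUIRE a system_domain_organization
	if c.SystemDomainOrganization == "" {
		return fmt.Errorf("system_domain_organization is required")
	}
	return nil
}

func main() {
	misconfigured := Config{
		AppDomains:   []string{"bosh-lite.com"},
		SystemDomain: "bosh-lite.com", // same as an app domain: invalid
	}
	if err := validate(misconfigured); err != nil {
		// failure to configure correctly keeps the Cloud Controller from starting
		fmt.Println("refusing to start:", err)
	}
}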
*Solution 2 - Cloud Controller Seeds System Routes*
To prevent a non-system app from requesting a hostname on a shared system domain, the Cloud Controller will take a list of hostnames and seed the database with routes. As routes are associated with a space, we will require a system_organization and system_space. An operator could choose to omit hostnames as desired.
cc.system_hostnames:
  description: List of hostnames for which routes will be created on the system domain.
  default: [api, uaa, login, doppler, loggregator, hm9000]
cc.system_space:
  description: Space where system routes will be created.
  default: system
*Pro:* Significantly less change for operators running Cloud Foundry with matching system and app domains.
*Con:* The Cloud Controller has knowledge of unrelated system components, and the list of defaults needs to be maintained as we add and remove components.
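A minimal sketch of Solution 2's seeding step, again in Go for illustration; findOrCreateRoute is a hypothetical stand-in, not actual Cloud Controller code:

package main

import "fmt"

// Hypothetical stand-in for route creation; the real work would happen
// in the Cloud Controller's database layer.
func findOrCreateRoute(host, domain, space string) {
	fmt.Printf("reserved route %s.%s in space %q\n", host, domain, space)
}

// seedSystemRoutes reserves each configured hostname on the system domain
// so ordinary apps can no longer claim it, with no change to the normal
// route-creation code path.
func seedSystemRoutes(systemDomain, systemSpace string, hostnames []string) {
	for _, host := range hostnames {
		findOrCreateRoute(host, systemDomain, systemSpace)
	}
}

func main() {
	defaults := []string{"api", "uaa", "login", "doppler", "loggregator", "hm9000"}
	seedSystemRoutes("bosh-lite.com", "system", defaults)
}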
Thanks,
-Nick

--
Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.
Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update
Yunata, Ricky <rickyy@...>
Thank you very much George & others who have helped me. Really appreciate it!
Ricky Yunata
-----Original Message-----
From: George Dean [mailto:gdean(a)pivotal.io]
Sent: Thursday, 7 April 2016 2:46 AM
To: cf-dev(a)lists.cloudfoundry.org
Subject: [cf-dev] Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update
Hi Ricky,
Fair enough, that sounds like an alright plan for now. If you decide to reenable SSL in the future and run into these problems again, please don't hesitate to let us know and we can try to give you a hand.
Thanks,
George
new cflinuxfs2-rootfs-release diego integration incoming
Are you tired of cherry-picking rootfs CVE commits? Do you integrate against Diego? Are you looking to shorten your rootfs patch-to-prod timelines? Well, do we have some good news for you!

As of today, diego-release's develop branch's manifest generation is capable of consuming a cflinuxfs2-rootfs package from the buildpack team's new rootfs release <https://bosh.io/releases/github.com/cloudfoundry/cflinuxfs2-rootfs-release>. To enable this, just add the temporary [-r] flag when calling diego-release/scripts/generate-deployment-manifest or diego-release/scripts/generate-bosh-lite-manifests to specify that you wish to use your uploaded rootfs-release. This temporary flag will eventually be removed and this will become the default behavior for diego-release, meaning that eventually you will be required to bosh upload a cflinuxfs2-rootfs-release.

In the coming days, we'll be working with the Release Integration team to add the rootfs-release to their generated release compatibility matrix <https://github.com/cloudfoundry-incubator/diego-cf-compatibility>.

Thanks,
Connor && Andrew
Diego Developers
404 not found: Requested route ('app.domain') does not exist. -- Worked fine this morning!
Tom Sherrod <tom.sherrod@...>
CF 230
A developer gets the error above for an existing application. I check another application: same error. I deploy a new application successfully: same error. bosh vms looks fine. bosh cck looks fine. cf routes: yes, the routes are there. No errors.
I created a new shared domain successfully. Deployed another app specifying that domain: successful. Pulling up the url gives the error in the subject.
What happened to the existing routes? A check of the router logs shows 404s. Where should I look next to find out where the routes have gone?
(This is a different environment than my other issue.)
Confused, Tom
Re: Binary Service Broker Feature Narrative
Mike Youngstrom <youngm@...>
Thanks for the responses, Mike. See inline:

To be clear, I think we're happy to test third-party agents in our CI pipelines to ensure they'll continue to work. I think this addresses your "breaking change" point in a very sustainable way.
Adding third-party agents to the CI pipeline will help. But if there is ever a breaking change in a buildpack, we now have a loose dependency on the third-party broker being upgraded before I can upgrade to a new cf-release. It is that kind of loose dependency breakage that I'm more concerned about.

I'd like to explore whether we can meet agent requirements with profile.d and/or buildpack lifecycle hooks. If we find a compelling blocker, we can revisit this and related decisions. Hopefully we can convince you as well as agent vendors that this is a reasonable path forward.
Perhaps the buildpack lifecycle hooks you mention will be a good enough API. Do you have anything describing the makeup of these hooks?

* Integration customization:
One of the nice things about buildpacks is the very clear customization path. We use App Dynamics, and we have custom requirements for how App Dynamics should be configured that App Dynamics won't want to support. What would the story be for customizing the App Dynamics broker's code that gets injected? It would be nice if there were a simple and consistent mechanism in place similar to what buildpacks already provide.
This isn't a use case we considered. Can you help me understand what kinds of customizations you're making? Specifics will help drive this conversation.
Here is our use case. App Dynamics designates an application's location in its UI using an App Name, Tier Name, and Node Name. Our organization has placed specific standard naming conventions around what the App Name and Tier Name should be. My organization also uses ServiceNow for CI management. The App Dynamics naming standard is to use specific fields in the application's CI for App Name and Tier Name. We only allow applications to use App Dynamics if they have a ServiceNow service bound, and we configure their App Name and Tier Name with values from the ServiceNow service's credentials. That is an example. If you think other use cases should be prioritized, then maybe we can have that conversation with Danny.
I wasn't suggesting that you move focus off of this problem area and towards something else. I was just pointing out that there may be an opportunity here to kill two birds with one stone if we look a little broader, since the problem appears similar to me. You guys are the experts, but let me brainstorm a little here to perhaps help the discussion along. This proposal already introduces the new concept of some kind of "buildpack hook" contract. What if you made it possible for users to specify the buildpack hooks an app needs, in addition to the buildpack, as part of the application model? (As a lowest common denominator, this could be an environment variable.) Then also allow service brokers to supply the same type of hook via VCAP_SERVICES (as this proposal proposes). It would be nice if broker hooks could be overridden by application-configured hooks to cover odd use cases. Thoughts?

Agent components may not be open-source (or OSS-compatible with APL2.0), and may not be licensed for general redistribution by Foundation vendors. We'd like to enable those companies to participate in the CF ecosystem.
I agree this is an issue. Does the binary redistribution aspect of the proposal cover this concern? Or are there also legal issues with code that might go into a buildpack to configure these non-ASL-compatible agents?

Thanks for taking the time to work this through with me.

Mike
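To make the override idea Mike floats above concrete, here is a rough sketch in Go; BUILDPACK_HOOKS and the buildpack_hook credential are hypothetical names invented for illustration, not part of any proposal or CF API:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// resolveHooks sketches the suggested precedence: an app-configured
// BUILDPACK_HOOKS variable (hypothetical name) overrides any hook a broker
// supplies via VCAP_SERVICES (here under a hypothetical buildpack_hook key).
func resolveHooks() []string {
	if v := os.Getenv("BUILDPACK_HOOKS"); v != "" {
		return []string{v} // app-level hook wins, covering odd use cases
	}
	var services map[string][]struct {
		Credentials struct {
			BuildpackHook string `json:"buildpack_hook"`
		} `json:"credentials"`
	}
	var hooks []string
	if err := json.Unmarshal([]byte(os.Getenv("VCAP_SERVICES")), &services); err == nil {
		for _, instances := range services {
			for _, inst := range instances {
				if inst.Credentials.BuildpackHook != "" {
					hooks = append(hooks, inst.Credentials.BuildpackHook)
				}
			}
		}
	}
	return hooks // broker-supplied hooks, if any
}

func main() {
	os.Setenv("VCAP_SERVICES", `{"appdynamics":[{"credentials":{"buildpack_hook":"configure-appd.sh"}}]}`)
	fmt.Println(resolveHooks()) // [configure-appd.sh]
}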
Re: Failure of metron agent in cloudfoundry
Just chiming in real quick: the specific line of code that the panic points to is here: https://github.com/cloudfoundry/loggregatorlib/blob/507ff1f4ef7749879a14b12fd2c42d654c99b2f2/servicediscovery/servicediscovery.go#L57

Any error from the etcd storeadapter other than storeadapter.ErrorTimeout and storeadapter.ErrorKeyNotFound will cause a panic. The error that was output was "Standby internal error", but with that code being as old as it is, I honestly couldn't guess what it actually means. The modern code should attempt to reconnect instead of panicking. It also uses etcd watch events instead of polling an endpoint using ListRecursively.

-Sam
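A minimal illustration of the pattern Sam describes; the error variables below are stand-ins, not the real storeadapter package:

package main

import (
	"errors"
	"fmt"
)

// Stand-ins for the two tolerated storeadapter errors mentioned above.
var (
	errTimeout     = errors.New("timeout")
	errKeyNotFound = errors.New("key not found")
)

// handleLookupError mirrors the old servicediscovery behavior: tolerated
// errors just mean "poll again"; anything unexpected panics the process.
func handleLookupError(err error) {
	switch err {
	case nil, errTimeout, errKeyNotFound:
		// keep polling on the next interval
	default:
		panic(err)
	}
}

func main() {
	handleLookupError(errTimeout) // tolerated
	fmt.Println("survived a tolerated error")
	handleLookupError(errors.New("402: Standby Internal Error () [0]")) // panics, as in the quoted log
}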
On Wed, Apr 6, 2016 at 1:08 PM, Bharath Posa <bharathp(a)vedams.com> wrote:
Yeah, I can try. Before that, can I know the cause of failure for this one? Because it is the current deployment we are having, we want to know what exactly went wrong with it. Just curious about debugging it. :-)
Bharath
On 6 Apr 2016 23:58, "Jim CF Campbell" <jcampbell(a)pivotal.io> wrote:
Hi Bharath,
That release is over a year old. Is there any chance you can upgrade to a newer release?
Jim
On Wed, Apr 6, 2016 at 12:23 PM, Bharath Posa <bharathp(a)vedams.com> wrote:
Hi Jim
Its version 206
Bharath
On 6 Apr 2016 23:25, "Jim CF Campbell" <jcampbell(a)pivotal.io> wrote:
Hi Bharath,
This looks like it might be older code. What version of Cloud Foundry
are you running?
Thanks, Jim
On Mon, Apr 4, 2016 at 11:09 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:
Hi all,
We recently started running Cloud Foundry on OpenStack and saw the metron agent fail on all the jobs. The metron.stderr.log provided the below log:
------------- panic: 402: Standby Internal Error () [0]
goroutine 28 [running]:
github.com/cloudfoundry/loggregatorlib/servicediscovery.(*serverAddressList).Run(0xc208052120, 0x12a05f200)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/loggregatorlib/servicediscovery/servicediscovery.go:57 +0x3c3
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:105 +0x1351
goroutine 1 [chan receive]: main.forwardMessagesToDoppler(0xc20802c040, 0xc2080526c0,
0xc20801ece0)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:187 +0x7a
main.main()
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:107 +0x137e
goroutine 5 [syscall]: os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:21 +0x1f created by os/signal.init·1 /usr/local/go/src/os/signal/signal_unix.go:27 +0x35
goroutine 7 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 8 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 9 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 10 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 11 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 12 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 13 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 14 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 15 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 16 [select]: github.com/cloudfoundry/gunk/workpool.worker(0xc208052600,
0xc20800bd40)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:71 +0x151
created by github.com/cloudfoundry/gunk/workpool.New /var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/gunk/workpool/workpool.go:44 +0x189
goroutine 17 [select]:
github.com/cloudfoundry/loggregatorlib/cfcomponent/registrars/collectorregistrar.(*CollectorRegistrar).Run(0xc20800c0c0)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/loggregatorlib/cfcomponent/registrars/collectorregistrar/collector_registrar.go:41 +0x201
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:78 +0xf7d
goroutine 18 [IO wait]: net.(*pollDesc).Wait(0xc2080103e0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc2080103e0, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).accept(0xc208010380, 0x0, 0x7ff2322ffbe0, 0xc20802aba8) /usr/local/go/src/net/fd_unix.go:419 +0x40b net.(*TCPListener).AcceptTCP(0xc20802e060, 0x596f54, 0x0, 0x0) /usr/local/go/src/net/tcpsock_posix.go:234 +0x4e net/http.tcpKeepAliveListener.Accept(0xc20802e060, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/net/http/server.go:1976 +0x4c net/http.(*Server).Serve(0xc2080527e0, 0x7ff232301bb8, 0xc20802e060,
0x0, 0x0)
/usr/local/go/src/net/http/server.go:1728 +0x92 net/http.(*Server).ListenAndServe(0xc2080527e0, 0x0, 0x0) /usr/local/go/src/net/http/server.go:1718 +0x154 net/http.ListenAndServe(0xc20802abb0, 0xf, 0x7ff232301b90,
0xc20800bc50, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:1808 +0xba
github.com/cloudfoundry/loggregatorlib/cfcomponent.Component.StartMonitoringEndpoints(0xc20801ece0, 0xc20802aac0, 0x9, 0x7ff232301940, 0xa393a8, 0x834ff0, 0xb, 0x0, 0x0, 0x0, ...)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/loggregatorlib/cfcomponent/component.go:76 +0x719
main.startMonitoringEndpoints(0xc20801ece0, 0xc20802aac0, 0x9,
0x7ff232301940, 0xa393a8, 0x834ff0, 0xb, 0x0, 0x0, 0x0, ...)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:118 +0x3e
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:80 +0xfb8
goroutine 19 [IO wait]: net.(*pollDesc).Wait(0xc208010450, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc208010450, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).readFrom(0xc2080103f0, 0xc2080c8000, 0xffff, 0xffff,
0x0, 0x0, 0x0, 0x7ff2322ffbe0, 0xc20802acf8)
/usr/local/go/src/net/fd_unix.go:269 +0x4a1 net.(*UDPConn).ReadFromUDP(0xc20802e068, 0xc2080c8000, 0xffff,
0xffff, 0x20, 0x0, 0x0, 0x0)
/usr/local/go/src/net/udpsock_posix.go:67 +0x124 net.(*UDPConn).ReadFrom(0xc20802e068, 0xc2080c8000, 0xffff, 0xffff,
0xffff, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/udpsock_posix.go:82 +0x12e
github.com/cloudfoundry/loggregatorlib/agentlistener.(*agentListener).Start(0xc208010fc0)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/loggregatorlib/agentlistener/agent_listener.go:46 +0x3a5
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:83 +0x101a
goroutine 20 [chan receive]:
metron/legacy_message/legacy_unmarshaller.(*legacyUnmarshaller).Run(0xc20802a9d0, 0xc208052540, 0xc2080523c0)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/legacy_message/legacy_unmarshaller/legacy_unmarshaller.go:30 +0x71
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:86 +0x1096
goroutine 21 [chan receive]:
metron/legacy_message/legacy_message_converter.(*legacyMessageConverter).Run(0xc20802e020, 0xc2080523c0, 0xc208052300)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/legacy_message/legacy_message_converter/legacy_message_converter.go:28 +0x5c
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:87 +0x10dc
goroutine 22 [IO wait]: net.(*pollDesc).Wait(0xc208010530, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc208010530, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).readFrom(0xc2080104d0, 0xc2080e0000, 0xffff, 0xffff,
0x0, 0x0, 0x0, 0x7ff2322ffbe0, 0xc20802add0)
/usr/local/go/src/net/fd_unix.go:269 +0x4a1 net.(*UDPConn).ReadFromUDP(0xc20802e098, 0xc2080e0000, 0xffff,
0xffff, 0x20, 0x0, 0x0, 0x0)
/usr/local/go/src/net/udpsock_posix.go:67 +0x124 net.(*UDPConn).ReadFrom(0xc20802e098, 0xc2080e0000, 0xffff, 0xffff,
0xffff, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/udpsock_posix.go:82 +0x12e metron/eventlistener.(*eventListener).Start(0xc208060400)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/eventlistener/event_listener.go:54 +0x3a0
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:89 +0x1108
goroutine 23 [chan receive]:
github.com/cloudfoundry/dropsonde/dropsonde_unmarshaller.(*dropsondeUnmarshaller).Run(0xc20802c080, 0xc2080525a0, 0xc208052300)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/dropsonde/dropsonde_unmarshaller/dropsonde_unmarshaller.go:65 +0x71
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:91 +0x114e
goroutine 24 [chan receive]: metron/message_aggregator.(*messageAggregator).Run(0xc208052180,
0xc208052300, 0xc208052420)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/message_aggregator/message_aggregator.go:57 +0x5c
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:94 +0x11ca
goroutine 25 [chan receive]: metron/varz_forwarder.(*VarzForwarder).Run(0xc20802c500,
0xc208052420, 0xc208052480)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/varz_forwarder/varz_forwarder.go:31 +0x51
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:97 +0x1237
goroutine 26 [chan receive]:
github.com/cloudfoundry/dropsonde/dropsonde_marshaller.(*dropsondeMarshaller).Run(0xc20801e0a0, 0xc208052480, 0xc2080524e0)
/var/vcap/data/compile/metron_agent/loggregator/src/
github.com/cloudfoundry/dropsonde/dropsonde_marshaller/dropsonde_marshaller.go:59 +0x5f
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:100 +0x12a3
goroutine 27 [chan receive]: main.signMessages(0xc20802b4b8, 0x8, 0xc2080524e0, 0xc2080526c0)
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:111 +0x7a
created by main.main
/var/vcap/data/compile/metron_agent/loggregator/src/metron/main.go:103 +0x130e
goroutine 33 [syscall, locked to thread]: runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:2232 +0x1
goroutine 31 [IO wait]: net.(*pollDesc).Wait(0xc208010a00, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc208010a00, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).Read(0xc2080109a0, 0xc208059000, 0x1000, 0x1000, 0x0,
0x7ff2322ffbe0, 0xc2080daa00)
/usr/local/go/src/net/fd_unix.go:242 +0x40f net.(*conn).Read(0xc20802e150, 0xc208059000, 0x1000, 0x1000, 0x0,
0x0, 0x0)
/usr/local/go/src/net/net.go:121 +0xdc net/http.noteEOFReader.Read(0x7ff232301d08, 0xc20802e150,
0xc20805af78, 0xc208059000, 0x1000, 0x1000, 0x75b940, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:1270 +0x6e net/http.(*noteEOFReader).Read(0xc20801f2e0, 0xc208059000, 0x1000,
0x1000, 0xc208012000, 0x0, 0x0)
<autogenerated>:125 +0xd4 bufio.(*Reader).fill(0xc208053200) /usr/local/go/src/bufio/bufio.go:97 +0x1ce bufio.(*Reader).Peek(0xc208053200, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:132 +0xf0 net/http.(*persistConn).readLoop(0xc20805af20) /usr/local/go/src/net/http/transport.go:842 +0xa4 created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:660 +0xc9f
goroutine 32 [select]: net/http.(*persistConn).writeLoop(0xc20805af20) /usr/local/go/src/net/http/transport.go:945 +0x41d created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:661 +0xcbc
goroutine 36 [IO wait]: net.(*pollDesc).Wait(0xc20810a3e0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc20810a3e0, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).Read(0xc20810a380, 0xc2080fb000, 0x1000, 0x1000, 0x0,
0x7ff2322ffbe0, 0xc2080dad28)
/usr/local/go/src/net/fd_unix.go:242 +0x40f net.(*conn).Read(0xc20802e318, 0xc2080fb000, 0x1000, 0x1000, 0x0,
0x0, 0x0)
/usr/local/go/src/net/net.go:121 +0xdc net/http.noteEOFReader.Read(0x7ff232301d08, 0xc20802e318,
0xc20805ad68, 0xc2080fb000, 0x1000, 0x1000, 0x75b940, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:1270 +0x6e net/http.(*noteEOFReader).Read(0xc2080dd000, 0xc2080fb000, 0x1000,
0x1000, 0xc208012000, 0x0, 0x0)
<autogenerated>:125 +0xd4 bufio.(*Reader).fill(0xc2081086c0) /usr/local/go/src/bufio/bufio.go:97 +0x1ce bufio.(*Reader).Peek(0xc2081086c0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:132 +0xf0 net/http.(*persistConn).readLoop(0xc20805ad10) /usr/local/go/src/net/http/transport.go:842 +0xa4 created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:660 +0xc9f
goroutine 37 [select]: net/http.(*persistConn).writeLoop(0xc20805ad10) /usr/local/go/src/net/http/transport.go:945 +0x41d created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:661 +0xcbc ----------------------
I tried to restart it using monit restart but it is still failing. Anybody got an idea about this error?
Regards,
Bharath
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963
Re: Persistent Volumes on Cloud Foundry
Yes, like James said, assuming a Consistent/Service Scheduler is running alongside our High Availability Scheduler, you could run databases like PostgreSQL and Redis. They would use network-attached storage, or local storage if the deployment supports that scenario.
On Wed, Apr 6, 2016 at 11:22 AM, James Bayer <jbayer(a)pivotal.io> wrote:
nic,
if you look at this section of the doc [1] it discusses "Reattachable Volumes" which are similar to EBS volumes attaching to EC2 instances without allowing multiple instances to be bound to the same volume at the same time. that likely aligns better with a database style use case (although performance of remote volumes will certainly be a consideration depending on the performance requirements). there is some new diego scheduling work required for "Reattachable Volumes".
"Distributed Filesystem" (same volume attached to multiple instances at the same time like NFS) and "Scratch" (temporary extra disk) are use cases which are planned to be supported first as they do not require diego scheduler changes to use now. the doc also references snapshots as something that may be needed for this use case. there is discussion going on in the doc if you want to continue it there.
[1] https://docs.google.com/document/d/1FPTOI1Wqhceh_7SsSICuhhosbtCw6PDCEvLbsxrep0A/edit#heading=h.mxp2p82umj8r
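A hypothetical model (not the proposal's actual schema) of the attachment rules that distinguish the three volume types discussed above:

package main

import "fmt"

// VolumeKind models the three use cases from the doc: "Reattachable
// Volumes" (EBS-like), "Distributed Filesystem" (NFS-like), and "Scratch".
type VolumeKind int

const (
	Reattachable  VolumeKind = iota // at most one instance attached at a time
	DistributedFS                   // same volume shared by many instances
	Scratch                         // temporary extra disk local to one instance
)

// canAttach captures the scheduling constraint: only a distributed
// filesystem may be attached to more than one instance concurrently.
func canAttach(kind VolumeKind, currentAttachments int) bool {
	if kind == DistributedFS {
		return true
	}
	return currentAttachments == 0
}

func main() {
	fmt.Println(canAttach(Reattachable, 1))  // false: already bound elsewhere
	fmt.Println(canAttach(DistributedFS, 5)) // true: shared by design
}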
On Tue, Apr 5, 2016 at 8:02 PM, Dr Nic Williams <drnicwilliams(a)gmail.com> wrote:
Is a goal of this work to run pure data services like PostgreSQL or Redis inside "application instances"?
On Tue, Apr 5, 2016 at 5:01 PM -0700, "Ted Young" <tyoung(a)pivotal.io> wrote:
This proposal describes the changes necessary to the Service Broker API to utilize the new Volume Management features of the Diego runtime. It contains examples of how services can provide a variety of persistent data access to CF applications, where each service maintains control of the volume lifecycle. This allows for services that provide blank storage space to applications, as well as services that provide access to complex or externally managed data (such as IBM's Watson).
http://bit.ly/cf-volume-proposal
We are moving fast on delivering a beta of this feature, so please have a look and give feedback now if this is of interest to you. More detail will be added to the proposal as necessary.
Cheers,
Ted Young Senior Engineer / Product Manager Pivotal Cloud Foundry
-- Thank you,
James Bayer
Re: Failure of metron agent in cloudfoundry
Yeah, I can try. Before that, can I know the cause of failure for this one? Because it is the current deployment we are having, we want to know what exactly went wrong with it. Just curious about debugging it. :-)
Bharath
On 6 Apr 2016 23:58, "Jim CF Campbell" <jcampbell(a)pivotal.io> wrote:
Hi Bharath,
That release is over a year old. Is there any chance you can upgrade to a
newer release?
Jim
On Wed, Apr 6, 2016 at 12:23 PM, Bharath Posa <bharathp(a)vedams.com> wrote:
Hi Jim
Its version 206
Bharath
On 6 Apr 2016 23:25, "Jim CF Campbell" <jcampbell(a)pivotal.io> wrote:
Hi Bharath,
This looks like it might be older code. What version of Cloud Foundry
are you running?
Thanks, Jim
On Mon, Apr 4, 2016 at 11:09 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:
Hi all,
We recently started running Cloud Foundry on OpenStack and saw the metron agent fail on all the jobs. The metron.stderr.log provided the below log:
------------- panic: 402: Standby Internal Error () [0]
I tried to restart it using monit restart but it is still failing. Anybody got an idea about this error?
Regards,
Bharath
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963
Re: Failure of metron agent in cloudfoundry
Hi Bharath,
That release is over a year old. Is there any chance you can upgrade to a newer release?
Jim
On Wed, Apr 6, 2016 at 12:23 PM, Bharath Posa <bharathp(a)vedams.com> wrote:
Hi Jim
Its version 206
Bharath
On 6 Apr 2016 23:25, "Jim CF Campbell" <jcampbell(a)pivotal.io> wrote:
Hi Bharath,
This looks like it might be older code. What version of Cloud Foundry are you running?
Thanks, Jim
On Mon, Apr 4, 2016 at 11:09 PM, Bharath Posa <bharathp(a)vedams.com> wrote:
Hi all,
We recently started running Cloud Foundry on OpenStack and saw the metron agent fail on all the jobs. The metron.stderr.log provided the below log:
------------- panic: 402: Standby Internal Error () [0]
goroutine 32 [select]: net/http.(*persistConn).writeLoop(0xc20805af20) /usr/local/go/src/net/http/transport.go:945 +0x41d created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:661 +0xcbc
goroutine 36 [IO wait]: net.(*pollDesc).Wait(0xc20810a3e0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc20810a3e0, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).Read(0xc20810a380, 0xc2080fb000, 0x1000, 0x1000, 0x0, 0x7ff2322ffbe0, 0xc2080dad28) /usr/local/go/src/net/fd_unix.go:242 +0x40f net.(*conn).Read(0xc20802e318, 0xc2080fb000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:121 +0xdc net/http.noteEOFReader.Read(0x7ff232301d08, 0xc20802e318, 0xc20805ad68, 0xc2080fb000, 0x1000, 0x1000, 0x75b940, 0x0, 0x0) /usr/local/go/src/net/http/transport.go:1270 +0x6e net/http.(*noteEOFReader).Read(0xc2080dd000, 0xc2080fb000, 0x1000, 0x1000, 0xc208012000, 0x0, 0x0) <autogenerated>:125 +0xd4 bufio.(*Reader).fill(0xc2081086c0) /usr/local/go/src/bufio/bufio.go:97 +0x1ce bufio.(*Reader).Peek(0xc2081086c0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:132 +0xf0 net/http.(*persistConn).readLoop(0xc20805ad10) /usr/local/go/src/net/http/transport.go:842 +0xa4 created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:660 +0xc9f
goroutine 37 [select]: net/http.(*persistConn).writeLoop(0xc20805ad10) /usr/local/go/src/net/http/transport.go:945 +0x41d created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:661 +0xcbc ----------------------
I tried to restart it using monit restart, but it is still failing. Does anybody have an idea about this error?
regards Bharath
-- Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963
|
|
Re: Failure of metron agent in cloudfoundry
Hi Jim
It's version 206.
Bharath
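For context, error 402 ("Standby Internal Error") is an etcd error code, and metron's service discovery loop polls etcd for Doppler addresses, so a panic like this usually points at an unhealthy etcd cluster rather than at metron itself. A minimal sketch of a health check, assuming etcd's default client port 4001 (the IP below is a placeholder for one of your etcd VMs, and the config path is the usual cf-release location but may vary by deployment):

# See which etcd nodes metron is configured against
grep -A3 EtcdUrls /var/vcap/jobs/metron_agent/config/metron_agent.json

# Ask a node for its view of the cluster; "402: Standby Internal Error"
# comes from etcd itself, so a sick node should show up here
curl http://10.0.16.19:4001/v2/stats/self
curl http://10.0.16.19:4001/v2/stats/leader

# If a node is wedged, restarting etcd on that VM (rather than metron)
# is typically what clears the 402s
monit restart etcd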
On 6 Apr 2016 23:25, "Jim CF Campbell" <jcampbell(a)pivotal.io> wrote: Hi Bharath,
This looks like it might be older code. What version of Cloud Foundry are you running?
Thanks, Jim
On Mon, Apr 4, 2016 at 11:09 PM, Bharath Posa <bharathp(a)vedams.com> wrote:
Hi all,
We are running Cloud Foundry on OpenStack, and recently saw the metron agent fail on all of the jobs. The metron.stderr.log provided the log below:
------------- panic: 402: Standby Internal Error () [0]
----------------------
I tried to restart it using monit restart, but it is still failing. Does anybody have an idea about this error?
regards Bharath
-- Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963
|
|
Re: Persistent Volumes on Cloud Foundry
Nic, if you look at this section of the doc [1], it discusses "Reattachable Volumes", which are similar to EBS volumes attaching to EC2 instances, without allowing multiple instances to be bound to the same volume at the same time. That likely aligns better with a database-style use case (although the performance of remote volumes will certainly be a consideration, depending on the requirements). There is some new Diego scheduling work required for "Reattachable Volumes". "Distributed Filesystem" (the same volume attached to multiple instances at the same time, like NFS) and "Scratch" (temporary extra disk) are the use cases planned to be supported first, as they do not require Diego scheduler changes. The doc also references snapshots as something that may be needed for this use case. There is discussion going on in the doc if you want to continue it there.

[1] https://docs.google.com/document/d/1FPTOI1Wqhceh_7SsSICuhhosbtCw6PDCEvLbsxrep0A/edit#heading=h.mxp2p82umj8r

On Tue, Apr 5, 2016 at 8:02 PM, Dr Nic Williams <drnicwilliams(a)gmail.com> wrote: Is a goal of this work to run pure data services like PostgreSQL or Redis inside "application instances"?
On Tue, Apr 5, 2016 at 5:01 PM -0700, "Ted Young" <tyoung(a)pivotal.io> wrote:
This proposal describes the changes necessary to the Service Broker API to utilize the new Volume Management features of the Diego runtime. It contains examples of how services can provide a variety of persistent data access to CF applications, where each service maintains control of the volume lifecycle. This allows some services to provide blank storage space to applications, and others to provide access to complex or externally managed data (such as IBM's Watson).
http://bit.ly/cf-volume-proposal
We are moving fast on delivering a beta of this feature, so please have a look and give feedback now if this is of interest to you. More detail will be added to the proposal as necessary.
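To make the proposal concrete, here is a rough sketch of what a bind against such a service might look like from the CLI, with a hypothetical bind response carrying volume mount information alongside the usual credentials. The service name and every field name below are illustrative only; the actual contract is defined in the linked proposal:

cf bind-service my-app my-shared-volume

# Hypothetical broker bind response (shape is illustrative, not final):
{
  "credentials": {},
  "volume_mounts": [
    {
      "container_path": "/var/vcap/data/shared",
      "mode": "rw",
      "driver": "nfs",
      "volume_id": "some-volume-guid"
    }
  ]
}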
Cheers,
Ted Young Senior Engineer / Product Manager Pivotal Cloud Foundry
-- Thank you, James Bayer
|
|
Re: Failure of metron agent in cloudfoundry
Hi Bharath,
This looks like it might be older code. What version of Cloud Foundry are you running?
Thanks, Jim
On Mon, Apr 4, 2016 at 11:09 PM, Bharath Posa <bharathp(a)vedams.com> wrote: Hi all,
We are running Cloud Foundry on OpenStack, and recently saw the metron agent fail on all of the jobs. The metron.stderr.log provided the log below:
------------- panic: 402: Standby Internal Error () [0]
----------------------
I tried to restart it using monit restart, but it is still failing. Does anybody have an idea about this error?
regards Bharath
-- Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963
|
|
Re: Binary Service Broker Feature Narrative
Responses inline ... On Wed, Apr 6, 2016 at 12:55 PM, Mike Youngstrom <youngm(a)gmail.com> wrote: The main concerns I have are:
* Maintaining Compatibility: If a buildpack were to make a breaking change, it may be complex to know that a vendor must upgrade its integration before the user can upgrade the buildpack in an environment. It seems this would require the buildpack to publish and maintain API-like compatibility with whatever hooks the buildpack believes broker vendors will need to help ensure compatibility. I'm not convinced that simply telling the vendors to put a script into profile.d and/or having the buildpack execute a script during staging will be enough of an API to protect the vendor, buildpack developer, and user from potential breakages.
To be clear, I think we're happy to test third-party agents in our CI pipelines to ensure they'll continue to work. I think this addresses your "breaking change" point in a very sustainable way. I'd like to explore whether we can meet agent requirements with profile.d and/or buildpack lifecycle hooks. If we find a compelling blocker, we can revisit this and related decisions. Hopefully we can convince you as well as agent vendors that this is a reasonable path forward.

* Integration customization: One of the nice things about buildpacks is the very clear customization path. We use AppDynamics and we have custom requirements for how AppDynamics should be configured that AppDynamics won't want to support. What would the story be for customizing the AppDynamics broker's code that gets injected? It would be nice if there were a simple and consistent mechanism in place similar to what buildpacks already provide.
This isn't a use case we considered. Can you help me understand what kinds of customizations you're making? Specifics will help drive this conversation.

* Services without brokers: A number of these services (especially initially) may not have official brokers installed for every customer. These customers instead tend to use user-provided services. Would these customers now be required to create brokers? Not a big problem for me, but I'm pretty sure that today user-provided services are quite common.
We're not proposing this as a *requirement* for vendors. Clearly, vendors are free (though we discourage it) to provide forked buildpacks; and we're not planning to unilaterally remove all agents from existing buildpacks. What we *are* doing is offering ideas around a method and an API contract for vendors who may have no other options (for legal reasons), or who prefer to control their own release cycles (for commercial support reasons). This proposal is an *extension* to current methods of injecting agents, not a *replacement*; one which we feel strongly will make maintenance of buildpacks easier for the open-source team, and which will enable (and hopefully encourage) vendors to own and support their own commercial products.

* Other extension requirements opportunity missed? By making this solution service-broker specific, are we missing an opportunity to solve a broader buildpack extension problem? Today we have users that occasionally require additional native library dependencies not available in the rootfs and not related to a service. These situations often require the user to fork the buildpack or instead look to Docker. It seems the requirements for broker extensions and these non-broker extensions often look functionally similar. Perhaps some effort could be put toward a more general extension mechanism for buildpacks that could work for both broker and non-broker use cases? Just a thought.
Effort *has* been put towards investigating more general extension mechanisms. Nothing we've invented addresses every use case -- if you have specific suggestions that we've overlooked, we're happy to discuss them, obviously. We're choosing one particular use case and addressing it here. This use case is timely, urgent, and commercially important for the ecosystem. If you think other use cases should be prioritized, then maybe we can have that conversation with Danny.

I'm not sure what the best solution is. Submitting PRs into buildpacks doesn't seem like that bad of an approach to me. This is how the Linux Kernel works after all. :) That said, I'm inclined to think that this effort could be focused on solving first the problem of binary distribution and see how things go from there.
Without going into too much painful detail, there are compelling legal, support, and staffing reasons why PRs for commercial products aren't sustainable. Agent components may not be open-source (or OSS-compatible with APL 2.0), and may not be licensed for general redistribution by Foundation vendors. We'd like to enable those companies to participate in the CF ecosystem.

Under the current model, any work on new agents or changes to existing agents is blocked on the small open-source Buildpacks team, and so commercial opportunities are gated on those four or five engineers. (This isn't a hypothetical; this is actually happening right now, and is painful for everyone involved.) Providing a method for companies in the CF ecosystem to unilaterally deliver commercial functionality into the open-source core of Cloud Foundry is critical for our collective success, and the success of the platform. Here, we're proposing to provide a generic extension point which will allow allocation of engineering resources where they logically should be -- on a commercial product team, and not on an (arguably understaffed) open-source team.

Certainly we agree that this proposal addresses binary redistribution. I'd like to start from that place of agreement, and learn more about the other aspects of this proposal by attempting to implement something, before trying to make further decisions. -mike
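As one concrete illustration of the profile.d approach discussed above, an agent broker could ship a startup script alongside its binaries. Everything below is hypothetical (the service name, credential fields, and paths are invented); the only contracts assumed are that scripts in the app's .profile.d directory are sourced at container startup and that bound service credentials appear in VCAP_SERVICES:

# .profile.d/acme-apm.sh: sourced in the app container at startup.
# Extract the agent download URL for the bound "acme-apm" service.
# jq is used for brevity; the rootfs may not include it, so a real
# script would parse VCAP_SERVICES with sed/awk or a bundled binary.
AGENT_URL=$(echo "$VCAP_SERVICES" | jq -r '.["acme-apm"][0].credentials.agent_download_url')

# Download the agent from the broker and expose it to the app
mkdir -p "$HOME/.acme-apm"
curl -sfL "$AGENT_URL" -o "$HOME/.acme-apm/agent"
chmod +x "$HOME/.acme-apm/agent"
export ACME_APM_AGENT="$HOME/.acme-apm/agent"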
Mike
On Wed, Apr 6, 2016 at 9:04 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:
Hi Mike,
You make a great point, and the question of "where should the responsibility live" is something we debated quite a bit, and even experimented with on a few different occasions.
You're right that, if this feature narrative is adopted, the agent broker will own the responsibility of compatibility with each buildpack (e.g., php, node, java, ruby), which is not easy to do.
But the unfortunate truth is that *somebody* has to own that code, and I don't see a compelling reason for the Buildpacks team to own and maintain code that semantically belongs to a commercial product team; especially when the commercial product team will likely be submitting PRs to the individual buildpacks in any case.
Obviously there isn't a clear "this is the best way" solution, but I'd like to understand whether there are truly compelling reasons to break apart the agent and the agent-injection code. If there's no obvious optimal path, then I'd prefer to keep the dependencies all contained within the service broker to both ease maintenance and to make it clear to whom the maintenance responsibilities belong.
On Tue, Apr 5, 2016 at 4:21 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
An interesting idea. I see the licensing of agent binaries and upgrading of agent binaries as a real problem that needs to be solved. I like the idea of the brokers providing binary agent downloads for supported platforms.
However, I'm less comfortable asking the broker to be responsible for scripting the installation of this agent for every possible buildpack. I'd feel better about keeping the agent configuration logic in the buildpack. Simply having a script run at staging or startup that sets some environment variables or something may be enough for some platforms but the integration may be tighter and more involved for other platforms. I'm inclined to think that how the agent is integrated into the buildpack should remain in the buildpack.
Thoughts?
Mike
On Tue, Apr 5, 2016 at 12:15 PM, Danny Rosen <drosen(a)pivotal.io> wrote:
Hi there, This feature narrative [1] looks to propose a new method for delivering service-agents. This new and exciting feature would enable an ecosystem of third-party developers to more easily create and maintain service-agents for usage in Cloud Foundry deployments.
[1] - https://docs.google.com/document/d/145aOpNoq7BpuB3VOzUIDh-HBx0l3v4NHLYfW8xt2zK0/edit#
|
|
What's even weirder is that the GET request to /login seems to do the right thing, but the POST to /oauth/token gets translated into a request for uaa.cisco.com. The error is coming back from the gorouter, so it's some weird configuration in whatever is sitting in front of the gorouter.
Amit
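One way to test that theory, assuming you can reach a gorouter VM directly (substitute ROUTER_IP with an address from bosh vms), is to replay the two requests against the router itself and compare with the same requests sent through the front-end load balancer:

# Bypass the external load balancer and talk to the gorouter directly
curl -v -H "Host: uaa.vikramdevtest1.io" http://ROUTER_IP/login
curl -v -H "Host: uaa.vikramdevtest1.io" -X POST http://ROUTER_IP/oauth/token

# If these reach UAA while the same requests through the load balancer
# 404 with 'uaa.cisco.com', the Host rewrite is happening in front of
# the gorouter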
On Wednesday, April 6, 2016, Sree Tummidi <stummidi(a)pivotal.io> wrote: Hi,
Can you please share your deployment manifest? There is something strange going on, because for some reason UAA requests are being routed to uaa.cisco.com instead of uaa.vikramdevtest1.io (as shown in the output from the info endpoint). Please make sure you mask all sensitive information in the manifest.
Thanks, Sree Tummidi Sr. Product Manager Identity - Pivotal Cloud Foundry
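While waiting on the manifest, one cheap check is to pull the manifest the director actually deployed and search it for the stray domain. This sketch assumes the v1 bosh CLI and guesses the deployment name from the file names in the trace below:

bosh download manifest cf-vikramdevtest1 /tmp/deployed-cf.yml

# Any occurrence of the wrong domain, or a mis-set uaa/login domain
# property, should show up here
grep -n "cisco.com" /tmp/deployed-cf.yml
grep -inE "(uaa|login).*domain" /tmp/deployed-cf.yml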
On Tue, Apr 5, 2016 at 5:54 PM, Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com> wrote:
Hi
I am using cf-231. After deploying, I can set the cf endpoint, but I could not log in.
While logging in, it gives me the following:
*404 Not Found: Requested route ('uaa.cisco.com') does not exist.*
* Server error, status code: 404, error code: , message:*
Related properties : uaa.require_htttps:false in yml
Below is the trace:
root(a)dev-inception-vm1:/opt/cisco/vms-installer/tenant-vikramdevtest1/cf-deploy# cf login
API endpoint: https://api.vikramdevtest1.io
Email> admin
Password>
Authenticating...
Server error, status code: 404, error code: , message:
Password> root(a)dev-inception-vm1:/opt/cisco/vms-installer/tenant-vikramdevtest1/cf-deploy# ls
cf-231-final-V.yml cf-template-231.yml service.yml
cf-settings.rb cf-vikramdevtest1.yml
root(a)dev-inception-vm1:/opt/cisco/vms-installer/tenant-vikramdevtest1/cf-deploy# vimdiff cf-template-231.yml cf-231-final-V.yml
2 files to edit
root(a)dev-inception-vm1:/opt/cisco/vms-installer/tenant-vikramdevtest1/cf-deploy# CF_TRACE_true
CF_TRACE_true: command not found
root(a)dev-inception-vm1:/opt/cisco/vms-installer/tenant-vikramdevtest1/cf-deploy# CF_TRACE=true cf login
API endpoint: https://api.vikramdevtest1.io
REQUEST: [2016-04-05T17:38:18Z]
GET /v2/info HTTP/1.1
Host: api.vikramdevtest1.io
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.12.2-24abed3 / linux
RESPONSE: [2016-04-05T17:38:18Z]
HTTP/1.1 200 OK
Content-Length: 586
Content-Type: application/json;charset=utf-8
Date: Tue, 05 Apr 2016 17:38:18 GMT
Server: nginx
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 54184ed0-310b-4a2f-5d5f-a1c21a397d49
X-Vcap-Request-Id: 876ca517-01fd-4f73-7a85-955955f3de41::86dcd763-5f8e-42b3-b657-7af57ec9ea21
{"name":"","build":"","support":"http://support.cloudfoundry.com","version":0,"description":"","authorization_endpoint":"http://uaa.vikramdevtest1.io","token_endpoint":"http://uaa.vikramdevtest1.io","min_cli_version":null,"min_recommended_cli_version":null,"api_version":"2.51.0","app_ssh_endpoint":"ssh.vikramdevtest1.io:2222","app_ssh_host_key_fingerprint":null,"app_ssh_oauth_client":"ssh-proxy","routing_endpoint":"https://api.vikramdevtest1.io/routing","logging_endpoint":"wss://loggregator.vikramdevtest1.io:4443","doppler_logging_endpoint":"wss://doppler.vikramdevtest1.io:4443"}
REQUEST: [2016-04-05T17:38:18Z]
GET /login HTTP/1.1
Host: uaa.vikramdevtest1.io
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.12.2-24abed3 / linux
RESPONSE: [2016-04-05T17:38:18Z]
HTTP/1.1 200 OK
Content-Length: 447
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Cache-Control: no-store
Content-Language: en-US
Content-Type: application/json;charset=UTF-8
Date: Tue, 05 Apr 2016 17:38:18 GMT
Expires: 0
Pragma: no-cache
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: d35da14d-3367-4032-6eef-d2050839147f
X-Xss-Protection: 1; mode=block
{"app":{"version":"3.1.0"},"links":{"uaa":"http://uaa.vikramdevtest1.io","passwd":"https://console.vikramdevtest1.io/password_resets/new","login":"http://login.vikramdevtest1.io","register":"https://console.vikramdevtest1.io/register"},"zone_name":"uaa","entityID":"login.vikramdevtest1.io","commit_id":"9b5c13d","idpDefinitions":{},"prompts":{"username":["text","Email"],"password":["password","Password"]},"timestamp":"2016-02-05T14:27:13+0000"}
Email> admin
Password>
Authenticating...
REQUEST: [2016-04-05T17:38:29Z]
POST /oauth/token HTTP/1.1
Host: uaa.vikramdevtest1.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.12.2-24abed3 / linux
grant_type=password&password=[PRIVATE DATA HIDDEN]&scope=&username=admin
RESPONSE: [2016-04-05T17:38:29Z]
HTTP/1.1 404 Not Found
Content-Length: 65
Content-Type: text/plain; charset=utf-8
Date: Tue, 05 Apr 2016 17:38:29 GMT
X-Cf-Routererror: unknown_route
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: dab37d6c-3fea-428c-516a-ec7906ff6d16
404 Not Found: Requested route ('uaa.cisco.com') does not exist.
Server error, status code: 404, error code: , message:
Password> root(a)dev-inception-vm1:/opt/cisco/vms-installer/tenant-vikramdevtest1/cf-deploy#
Regards
Nithiyasri
|
|
Hi,

Can you please share your deployment manifest? There is something strange going on, because for some reason UAA requests are being routed to uaa.cisco.com instead of uaa.vikramdevtest1.io (as shown in the output from the info endpoint). Please make sure you mask all sensitive information in the manifest.

Thanks,
Sree Tummidi
Sr. Product Manager, Identity - Pivotal Cloud Foundry

On Tue, Apr 5, 2016 at 5:54 PM, Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com> wrote:

Hi
I am using cf-231. After deploying, I can set the cf endpoint, but I could not log in.
While logging in, it gives me the following:
*404 Not Found: Requested route ('uaa.cisco.com') does not exist.*
* Server error, status code: 404, error code: , message:*
Related properties : uaa.require_htttps:false in yml
Regards
Nithiyasri
|
|
Re: Binary Service Broker Feature Narrative
Mike Youngstrom <youngm@...>
The main concerns I have are:
* Maintaining Compatibility: If a buildpack were to make a breaking change, it may be complex to know that a vendor must upgrade its integration before the user can upgrade the buildpack in an environment. It seems this would require the buildpack to publish and maintain API-like compatibility with whatever hooks the buildpack believes broker vendors will need to help ensure compatibility. I'm not convinced that simply telling the vendors to put a script into profile.d and/or having the buildpack execute a script during staging will be enough of an API to protect the vendor, buildpack developer, and user from potential breakages.
* Integration customization: One of the nice things about buildpacks is the very clear customization path. We use AppDynamics and we have custom requirements for how AppDynamics should be configured that AppDynamics won't want to support. What would the story be for customizing the AppDynamics broker's code that gets injected? It would be nice if there were a simple and consistent mechanism in place similar to what buildpacks already provide.
* Services without brokers: A number of these services (especially initially) may not have official brokers installed for every customer. These customers instead tend to use user-provided services. Would these customers now be required to create brokers? Not a big problem for me, but I'm pretty sure that today user-provided services are quite common.
* Other extension requirements opportunity missed? By making this solution service-broker specific, are we missing an opportunity to solve a broader buildpack extension problem? Today we have users that occasionally require additional native library dependencies not available in the rootfs and not related to a service. These situations often require the user to fork the buildpack or instead look to Docker. It seems the requirements for broker extensions and these non-broker extensions often look functionally similar. Perhaps some effort could be put toward a more general extension mechanism for buildpacks that could work for both broker and non-broker use cases? Just a thought.
I'm not sure what the best solution is. Submitting PRs into buildpacks doesn't seem like that bad of an approach to me. This is how the Linux Kernel works after all. :) That said, I'm inclined to think that this effort could be focused on solving first the problem of binary distribution and see how things go from there.
Mike
|
|