Increasing warden yml network and user pool size
We are seeing some performance bottlenecks in warden, and at times warden drops all connections under increasing load. We think increasing the network and user pool_size might help. We have tried making those changes through the CF YAML, but they aren't getting set.
Any clues on how we can make this take effect?
sudo more ./var/vcap/data/jobs/dea_next/a25eb00c949666d87c19508cc917f1601a5c5ba8-1360a7f1564ff515d5948677293e3aa209712f4f/config/warden.yml
---
server:
  unix_domain_permissions: 0777
  unix_domain_path: /var/vcap/data/warden/warden.sock
  container_klass: Warden::Container::Linux
  container_rootfs_path: /var/vcap/packages/rootfs_lucid64
  container_depot_path: /var/vcap/data/warden/depot
  container_rlimits:
    core: 0
  pidfile: /var/vcap/sys/run/warden/warden.pid
  quota:
    disk_quota_enabled: true
logging:
  file: /var/vcap/sys/log/warden/warden.log
  level: info
  syslog: vcap.warden
health_check_server:
  port: 2345
network:
  pool_start_address: 10.254.0.0
  pool_size: 256
  # Interface MTU size
  # (for OpenStack use 1454 to avoid problems with rubygems with GRE tunneling)
  mtu: 1400
user:
  pool_start_uid: 20000
  pool_size: 256
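One thing to keep in mind: the warden.yml above lives under /var/vcap/data/jobs, which is rendered by BOSH from the job's templates, so hand-edits to that file are overwritten on the next deploy. The pool sizes need to come in through the deployment manifest and be rolled out with a redeploy. A minimal sketch of such an override, assuming the dea_next job exposes properties along these lines (the names below are hypothetical; check the job's spec file in your cf-release version for the exact keys):

properties:
  dea_next:
    # hypothetical property names - confirm against jobs/dea_next/spec
    warden_network_pool_size: 512
    warden_user_pool_size: 512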
Thanks,
Animesh
Re: Installing Diego feedback
On Wed, Jul 1, 2015 at 2:46 PM Eric Malm <emalm(a)pivotal.io> wrote: Hi, Mike,
Thanks for the feedback! Responses inline below.
On Tue, Jun 30, 2015 at 5:05 PM, Mike Heath <elcapo(a)gmail.com> wrote:
I just got Diego successfully integrated and deployed in my Cloud Foundry dev environment. Here's a bit of feedback.
One of the really nice features of BOSH is that you can set a property once and any job that needs that property can consume it. Unfortunately, the Diego release takes this beautiful feature and throws it out the window. The per-job namespaced properties suck. Sure, this would be easier if I were using Spiff, but our existing deployments don't use Spiff. Unless Spiff is the only supported option for using the Diego BOSH release, the Diego release properties need to be fixed to avoid the mass duplication, and properties that match up with properties in cf-release should be renamed. I spent more time matching up duplicate properties than anything else, which is unfortunate since BOSH should have relieved me of this pain.
We intentionally decided to namespace these component properties very early on in the development of diego-release: initially everything was collapsed, as it is in cf-release, and then when we integrated against cf-release deployments and their manifests, we ended up with some property collisions, especially with etcd. Consequently, we took the opposite tack and scoped all those properties to the individual diego components to keep them decoupled. I've generally found it helpful to think of them as 'input slots' to each specific job, with the authoritative input value coming from some other source (often a cf-release property), but as you point out that can be painful and error-prone without another tool such as spiff to propagate the values. As we explore how we might reorganize parts of cf-release and diego-release into more granular releases designed for composition, and as BOSH links emerge to give us richer semantics about how to flow property information between jobs, we'll iterate on these patterns. As an immediate workaround, you could also use YAML anchors and aliases to propagate those values in your hand-crafted manifest.
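For example, a hand-crafted manifest could declare a shared value once with a YAML anchor and reuse it via aliases, roughly like this (the property paths are illustrative, not necessarily the exact diego-release names):

properties:
  nats:
    user: &nats_user nats
    password: &nats_password c1oudc0w   # example value
  diego:
    receptor:
      nats:
        username: *nats_user            # alias picks up the value anchored above
        password: *nats_password
    route_emitter:
      nats:
        username: *nats_user
        password: *nats_password

Changing the anchored value then updates every alias when the manifest is parsed, which removes most of the hand-matching of duplicated properties.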
So, I certainly like the idea of namespacing Diego specific properties. The job level granularity is excessive though. cf-release is also very old so a lot of its properties could be rethought/reorganized. Just warn us when you make changes. :) And yeah, I'm already using anchors and aliases all over.
SSH Proxy doesn't support 2048 bit RSA keys. I get this error:
{"timestamp":"1435189129.986424685","source":"ssh-proxy","message":"ssh-proxy.failed-to-parse-host-key","log_level":3,"data":{"error":"crypto/rsa: invalid exponents","trace":"goroutine 1 [running]:\ ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc2080640c0, 0x8eba10, 0x18, 0x7fa802383b00, 0xc20802ad80, 0x0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/pivotal-golang/lager/logger.go:131 +0xc8\nmain.configure(0x7fa8023886e0, 0xc2080640c0, 0x7fa8023886e0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:167 +0xacb\nmain.main()\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:75 +0xb4\n"}}
1024-bit keys work just fine.
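(For reference, the proxy's host key comes in through the deployment manifest, so a larger key would be supplied roughly as sketched below; the property path is an assumption based on this discussion and should be verified against the ssh_proxy job spec:)

properties:
  diego:
    ssh_proxy:
      host_key: |
        -----BEGIN RSA PRIVATE KEY-----
        (PEM body of the key, e.g. one generated with: ssh-keygen -t rsa -b 2048)
        -----END RSA PRIVATE KEY-----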
The *.cc.external_port properties should have a default value (9022) just like cc.external_port does in the cloud_controller_ng job in cf-release.
In the receptor job, there's a property diego.receptor.nats.username but every other job (in cf-release and diego-release) uses nats.user rather than nats.username.
We could standardize on nats.user everywhere (the route-emitter needs these NATS properties, too, and it also currently uses nats.username). I also think it makes sense to supply that default CC port in the job specs and to make sure our spiff templates supply overrides from the cf manifest correctly. I'll add a story to straighten these out.
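For illustration, such a default would look roughly like this in the receptor's BOSH job spec (the exact property layout in diego-release may differ):

properties:
  diego.receptor.cc.external_port:
    description: "External port of the Cloud Controller API"
    default: 9022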
Rather than deploy two etcd jobs, I'm just using the etcd job provided by cf-release. Is there a reason not to do this? Everything appears to be working fine. I haven't yet run the DATs though.
I agree with Matt: these two etcd clusters will soon become operationally distinct as we secure access to Diego's internal etcd. I don't believe anything will currently collide in the keyspace, but we also can't make strong guarantees about that.
Thanks for the clarification. If anything's colliding in the keyspace, I haven't found it yet. :) I'll fix my deployment.
Consul is great and all, but in my dev environment the Consul server crashed a couple of times and it took a while to discover that the reason CF crapped out was because Consul DNS lookups were broken. Is Consul a strategic solution or is it just a stopgap until BOSH Links are ready? (I would prefer removing Consul in favor of BOSH links, for the record.)
So far, Consul has provided us with a level of dynamic DNS-based service discovery beyond what it sounds like BOSH links can: for example, if one of the receptors is down for some reason, it's removed from the consul-provided DNS entries in a matter of seconds. That said, we're also exploring other options to provide that type of service discovery, such as etcd-backed SkyDNS.
Yeah, that makes sense. I suppose I'm used to everything in cf-release going through the Gorouter for automatic fail-over. Thanks for the response. Thanks, Eric _______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
we don't use 'saml' as a profile anymore. that is gone. if it exists in documentation we must remove it
On Wed, Jul 1, 2015 at 3:10 PM, Filip Hanik <fhanik(a)pivotal.io> wrote: change
spring_profiles: saml
to
spring_profiles: default
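In other words, the resulting uaa.yml fragment is just the sketch below; the SAML provider configuration itself stays in login.yml:

spring_profiles: default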
On Wed, Jul 1, 2015 at 3:08 PM, Khan, Maaz <Maaz.Khan(a)emc.com> wrote:
Hi Filip,
Thanks for the links.
Here is what I did.
Checked out UAA code from git.
In the resource/uaa.yml file I modified the profile to reflect the use of SAML:
spring_profiles: saml
In login.yml I have populated these entries:
saml:
  entityID: https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust
  nameID: 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified'
  assertionConsumerIndex: 0
  signMetaData: true
  signRequest: true
  socket:
    # URL metadata fetch - pool timeout
    connectionManagerTimeout: 10000
    # URL metadata fetch - read timeout
    soTimeout: 10000
  #BEGIN SAML PROVIDERS
  providers:
    openam-local:
      idpMetadata: https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml
      nameID: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
      assertionConsumerIndex: 0
      signMetaData: false
      signRequest: false
      showSamlLoginLink: true
      linkText: 'Log in with OpenAM'
Now when I run UAA locally and hit the URL http://localhost:8080/uaa/login I get this error
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'applicationProperties' defined in class path resource [spring/env.xml]: Cannot resolve reference to bean 'platformProperties' while setting bean property 'propertiesArray' with key [0]; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'platformProperties' is defined
Given that I have Entity ID – https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust
And federated metadata from ADFS – : https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml
What will be the correct steps to integrate with ADFS?
Thanks
Maaz
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
change
spring_profiles: saml
to
spring_profiles: default
Re: Can't create/update buildpacks, "a filename must be specified"
Hi Kyle,
The fundamental issue with not using Nginx is that all uploads/downloads block the cloud controller instance. A long blocking request to the CC can be a serious issue in any CF environment. As such, all instances of the CC should be deployed with Nginx enabled.
Best, Zachary Auerbach, CF Runtime Team.
On Wed, Jul 1, 2015 at 12:09 PM, kyle havlovitz <kylehav(a)gmail.com> wrote: What will be lost by not having nginx? I've had it disabled and haven't seen other problems before this.
On Tue, Jun 30, 2015 at 7:17 PM, CF Runtime <cfruntime(a)gmail.com> wrote:
Hi Kyle,
This component is specifically designed to work with Nginx. Despite the fact that you can successfully upload a buildpack by making a small change with Nginx disabled, there are many other areas where not having Nginx will severely cripple the functionality of the Cloud Controller.
Why are you trying to deploy a CC without Nginx?
Zachary Auerbach, CF Runtime Team.
On Tue, Jun 30, 2015 at 3:21 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:
I know it's recommended, but uploading buildpacks seems to just be plain broken without it (though I fixed it by changing 1 line of code in the cloud controller). The question is, is this supposed to work or is this something broken that I should make a PR for?
On Tue, Jun 30, 2015 at 5:56 PM, CF Runtime <cfruntime(a)gmail.com> wrote:
Hi Kyle,
We highly recommend using Nginx as a proxy for uploads and downloads to/from the cloud controller. Without it all long-running data transfers to the CC will block that instance of the cloud controller.
It's possible, but may have unintended and unsupported side-effects.
Best, Zachary Auerbach, CF Runtime Team.
On Tue, Jun 30, 2015 at 10:45 AM, kyle havlovitz <kylehav(a)gmail.com> wrote:
The thing is, I got it to work with use_nginx set to false just by modifying one line of code in buildpack_bits_controller.rb. Couldn't the code just be changed to support this?
On Tue, Jun 30, 2015 at 1:36 PM, Dieu Cao <dcao(a)pivotal.io> wrote:
Yes, nginx is required.
-Dieu
On Tue, Jun 30, 2015 at 3:32 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:
Yes, I have nginx disabled, would that cause problems uploading a buildpack like this?
On Mon, Jun 29, 2015 at 9:18 PM, Matthew Sykes < matthew.sykes(a)gmail.com> wrote:
You may need to supply your access log from the nginx in front of cc or the cc log because when I create a new buildpack, it's working just fine:
$ CF_TRACE=true cf create-buildpack test-binary-bp ./binary_buildpack-cached-v1.0.1.zip 1 --enable
VERSION:
6.11.3-cebadc9
Creating buildpack test-binary-bp...
REQUEST: [2015-06-29T20:10:37-04:00]
POST /v2/buildpacks?async=true HTTP/1.1
Host: api.10.244.0.34.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.11.3-cebadc9 / darwin
{"name":"test-binary-bp","position":1,"enabled":true}
RESPONSE: [2015-06-29T20:10:37-04:00]
HTTP/1.1 201 Created
Content-Length: 337
Content-Type: application/json;charset=utf-8
Date: Tue, 30 Jun 2015 00:10:37 GMT
Location: /v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b
Server: nginx
X-Cf-Requestid: 49dc1a83-c37a-4311-66e5-5d2a2aea5df3
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: c7ac7b0c-9261-4b2b-7df6-d7788ba26827::168b561c-4e58-4f7c-9bf4-50ac6589522c
{
"metadata": {
"guid": "16e73f3c-3980-4603-ba07-8e5b08b78f7b",
"url": "/v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b",
"created_at": "2015-06-30T00:10:37Z",
"updated_at": null
},
"entity": {
"name": "test-binary-bp",
"position": 1,
"enabled": true,
"locked": false,
"filename": null
}
}
OK
Uploading buildpack test-binary-bp...
REQUEST: [2015-06-29T20:10:37-04:00]
PUT /v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b/bits HTTP/1.1
Host: api.10.244.0.34.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: multipart/form-data; boundary=a63345d0d8a03bcdf636aed591aa2d57acfe2e910bcc2a3835ed609c270f
User-Agent: go-cli 6.11.3-cebadc9 / darwin
[MULTIPART/FORM-DATA CONTENT HIDDEN]
Done uploading
RESPONSE: [2015-06-29T20:10:37-04:00]
HTTP/1.1 201 Created
Content-Length: 387
Content-Type: application/json;charset=utf-8
Date: Tue, 30 Jun 2015 00:10:37 GMT
Server: nginx
X-Cf-Requestid: dd6cff31-5d91-4730-6f46-cd6e085bd007
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: f5db441f-1293-429a-460a-74eb71cffaeb::c0a244bf-a50b-47d3-b2f1-cbab01a3d22a
{
"metadata": {
"guid": "16e73f3c-3980-4603-ba07-8e5b08b78f7b",
"url": "/v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b",
"created_at": "2015-06-30T00:10:37Z",
"updated_at": "2015-06-30T00:10:37Z"
},
"entity": {
"name": "test-binary-bp",
"position": 1,
"enabled": true,
"locked": false,
"filename": "binary_buildpack-cached-v1.0.1.zip"
}
}
OK
✓ $ cf buildpacks
Getting buildpacks...
buildpack position enabled locked filename
test-binary-bp 1 true false binary_buildpack-cached-v1.0.1.zip
staticfile_buildpack 2 true false staticfile_buildpack-cached-v1.2.0.zip
java_buildpack 3 true false java-buildpack-v3.0.zip
ruby_buildpack 4 true false ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack 5 true false nodejs_buildpack-cached-v1.3.4.zip
go_buildpack 6 true false go_buildpack-cached-v1.4.0.zip
python_buildpack 7 true false python_buildpack-cached-v1.4.0.zip
php_buildpack 8 true false php_buildpack-cached-v3.3.0.zip
binary_buildpack 9 true false binary_buildpack-cached-v1.0.1.zip
✓ $ cf --version
cf version 6.11.3-cebadc9-2015-05-20T18:59:33+00:00
For buildpacks, nginx handles most of the heavy lifting and then passes modified parameters to the cc for processing. The upload processor then uses the modified params to do the right thing...
Are you running a non-standard configuration that doesn't use nginx to frontend cc?
On Mon, Jun 29, 2015 at 3:22 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:
After some more digging I found that it seems to be a problem in https://github.com/cloudfoundry/cloud_controller_ng/blob/master/app/controllers/runtime/buildpack_bits_controller.rb#L21 . The 'params' object here is being referenced incorrectly; it contains a key called 'buildpack' that maps to an object which has a :filename field containing the correct buildpack filename, but the code is trying to reference params['buildpack_name'], which doesn't exist, so it throws an exception. Changing the line above to say uploaded_filename = params['buildpack'][:filename] fixed the issue for me. Could this be caused by my CLI and the cloud controller having out-of-sync versions? The API version on the CC is 2.23.0, and I've been using the 6.11 CLI.
On Mon, Jun 29, 2015 at 9:31 AM, kyle havlovitz <kylehav(a)gmail.com> wrote:
Here's a gist of the output I get and the command I run: https://gist.github.com/MrEnzyme/7ebd45c9c34151a52050
On Fri, Jun 26, 2015 at 10:58 PM, Matthew Sykes < matthew.sykes(a)gmail.com> wrote:
It should work since our acceptance tests validate this on every build we cut [1]. Are you running the operation as someone with a cc admin scope?
If you want to create a gist with the log (with secrets redacted) from running `cf` with CF_TRACE=true, we could certainly take a look.
[1]: https://github.com/cloudfoundry/cf-acceptance-tests/blob/cdced815f585ef4661b2182799d1d6a7119489b0/apps/app_stack_test.go#L36-L104
On Fri, Jun 26, 2015 at 2:36 PM, kyle havlovitz < kylehav(a)gmail.com> wrote:
I'm having an issue where I can't upload any buildpack to cloudfoundry; it says "The buildpack upload is invalid: a filename must be specified" and the cf_trace confirms it's sending a null value for filename. The thing is, I have specified a file name every time and get this error. I've used a few different CLI versions and uploaded different buildpacks as both zip files/directories, and nothing works. Is this a bug in the CLI/cloud controller, or am I doing something wrong?
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
-- Matthew Sykes matthew.sykes(a)gmail.com
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
-- Matthew Sykes matthew.sykes(a)gmail.com
Re: Can't create/update buildpacks, "a filename must be specified"
kyle havlovitz <kylehav@...>
What will be lost by not having nginx? I've had it disabled and haven't seen other problems before this.
Re: Installing Diego feedback
I have created and used quite a few BOSH releases and the property namespacing feels very odd to me. I'm very curious to understand the reasoning behind it. I can't reproduce my 2048-bit key problem. It must have been some odd fluke on my end. Thanks for the response! -Mike
On Tue, Jun 30, 2015 at 8:37 PM Matthew Sykes <matthew.sykes(a)gmail.com> wrote: Thanks for the feedback. I'll let others comment on the bosh aspects other than to say that we are expecting people to use spiff to generate the manifests and that the decision to namespace properties was intentional.
For the SSH proxy, it absolutely does support 2048-bit RSA keys, so I'm not sure why you ran into a problem. Our bosh-lite template uses a 2048-bit key and we have tests that use 1024- and 2048-bit keys in CI. If you want to dig into that, please open an issue.
As for consul, it's TBD whether or not it becomes a strategic solution but it offers capabilities above and beyond bosh links. We kicked off some work today to look at recreating the health checks and dns resolution with a sky dns + etcd solution. If that looks promising, we'll probably go in that direction.
On the etcd side, it's probably best not to share the two for now. Diego is in the process of enabling mutual auth over SSL - something that probably won't be done in cf-release any time soon.
On Tue, Jun 30, 2015 at 8:05 PM, Mike Heath <elcapo(a)gmail.com> wrote:
I just got Diego successfully integrated and deployed in my Cloud Foundry dev environment. Here's a bit of feedback.
One of the really nice features of BOSH is that you can set a property once and any job that needs that property can consume it. Unfortunately, the Diego release takes this beautiful feature and throws it out the window. The per-job namespaced properties suck. Sure, this would be easier if I were using Spiff, but our existing deployments don't use Spiff. Unless Spiff is the only supported option for using the Diego BOSH release, the Diego release properties need to be fixed to avoid the mass duplication, and properties that match up with properties in cf-release should be renamed. I spent more time matching up duplicate properties than anything else, which is unfortunate since BOSH should have relieved me of this pain.
SSH Proxy doesn't support 2048 bit RSA keys. I get this error:
{"timestamp":"1435189129.986424685","source":"ssh-proxy","message":"ssh-proxy.failed-to-parse-host-key","log_level":3,"data":{"error":"crypto/rsa: invalid exponents","trace":"goroutine 1 [running]:\ ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc2080640c0, 0x8eba10, 0x18, 0x7fa802383b00, 0xc20802ad80, 0x0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/pivotal-golang/lager/logger.go:131 +0xc8\nmain.configure(0x7fa8023886e0, 0xc2080640c0, 0x7fa8023886e0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:167 +0xacb\nmain.main()\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:75 +0xb4\n"}}
1024-bit keys work just fine.
The *.cc.external_port properties should have a default value (9022) just like cc.external_port does in the cloud_controller_ng job in cf-release.
In the receptor job, there's a property diego.receptor.nats.username but every other job (in cf-release and diego-release) uses nats.user rather than nats.username.
Rather than deploy two etcd jobs, I'm just using the etcd job provided by cf-release. Is there a reason not to do this? Everything appears to be working fine. I haven't yet run the DATs though.
Consul is great and all, but in my dev environment the Consul server crashed a couple of times and it took a while to discover that the reason CF crapped out was because Consul DNS lookups were broken. Is Consul a strategic solution or is it just a stopgap until BOSH Links are ready? (I would prefer removing Consul in favor of BOSH links, for the record.)
-Mike
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
-- Matthew Sykes matthew.sykes(a)gmail.com _______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
Re: How to update blobs in blob.cfblob.com ?
Matthew Sykes <matthew.sykes@...>
Since you won't be able to upload the blobs to the cf-release bucket, I'd suggest you capture the output of `bosh blobs` in your pull request. That command should enumerate all of the new blobs and their sizes.
For each entry that's there, point to a publicly available URL and a hash that can be used to verify it.
When the PR is reviewed, if things look good, the pair will likely pull the blobs down to evaluate them and test the overall function.
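A hypothetical sketch of what such a listing could look like in the pull request description (the blob name, URL, and hash below are made up):

new blobs (output of `bosh blobs`):
  golang/go1.4.2.linux-amd64.tar.gz:
    size: 92M
    url: https://example.com/mirror/go1.4.2.linux-amd64.tar.gz
    sha1: 0123456789abcdef0123456789abcdef01234567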
-- Matthew Sykes matthew.sykes(a)gmail.com
loggregator tc repeating error "websocket: close 1005"
Gianluca Volpe <gvolpe1968@...>
hi Erik
any news on this?
thx for your help
G
Loggregator Runtime error: invalid memory address or nil pointer dereference
Gianluca Volpe <gvolpe1968@...>
hi
has anyone faced the same issue?
it seems to be quite important
thx
---------- Forwarded message ---------- From: Gianluca Volpe <gvolpe1968(a)gmail.com> Date: 2015-06-23 12:30 GMT+02:00 Subject: Loggregator Runtime error: invalid memory address or nil pointer dereference To: cf-dev(a)lists.cloudfoundry.org
The /var/vcap/sys/log/loggregator_trafficcontroller/loggregator_trafficcontroller.stderr.log file has frequent occurrences of this type of snippet
2015/06/16 18:16:13 http: panic serving <routerIP>:<port>: runtime error: invalid memory address or nil pointer dereference goroutine 1614338 [running]: net/http.func·011() /usr/local/go/src/pkg/net/http/server.go:1100 +0xb7 runtime.panic(0x7c1a40, 0xa4a993) /usr/local/go/src/pkg/runtime/panic.c:248 +0x18d github.com/cloudfoundry/loggregatorlib/server/handlers.(*httpHandler).ServeHTTP(0xc209a61e30, 0x7f5171a3b288, 0xc20abf5ae0, 0xc20b12bad0)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/ github.com/cloudfoundry/loggregatorlib/server/handlers/http_handler.go:29 +0x2f5 trafficcontroller/dopplerproxy.(*Proxy).serveWithDoppler(0xc2080ac0d0, 0x7f5171a3b288, 0xc20abf5ae0, 0xc20b12bad0, 0xc20aaacd6b, 0xa, 0xc20aaacd46, 0x24, 0x0, 0x12a05f200, ...)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/trafficcontroller/dopplerproxy/doppler_proxy.go:181 +0x152 trafficcontroller/dopplerproxy.(*Proxy).serveAppLogs(0xc2080ac0d0, 0x7f5171a3b288, 0xc20abf5ae0, 0xc20b12bad0)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/trafficcontroller/dopplerproxy/doppler_proxy.go:170 +0x849 trafficcontroller/dopplerproxy.(*Proxy).ServeHTTP(0xc2080ac0d0, 0x7f5171a3b288, 0xc20abf5ae0, 0xc20b12b930)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/trafficcontroller/dopplerproxy/doppler_proxy.go:86 +0x4d9 net/http.serverHandler.ServeHTTP(0xc2080046c0, 0x7f5171a3b288, 0xc20abf5ae0, 0xc20b12b930) /usr/local/go/src/pkg/net/http/server.go:1673 +0x19f net/http.(*conn).serve(0xc20a8c6f00) /usr/local/go/src/pkg/net/http/server.go:1174 +0xa7e created by net/http.(*Server).Serve /usr/local/go/src/pkg/net/http/server.go:1721 +0x313 2015/06/22 11:28:58 http: panic serving <routerIP>:<port>: runtime error: invalid memory address or nil pointer dereference goroutine 2034572 [running]: net/http.func·011() /usr/local/go/src/pkg/net/http/server.go:1100 +0xb7 runtime.panic(0x7c1a40, 0xa4a993) /usr/local/go/src/pkg/runtime/panic.c:248 +0x18d github.com/cloudfoundry/loggregatorlib/server/handlers.(*httpHandler).ServeHTTP(0xc20bf2d820, 0x7f5171a3b288, 0xc20d33a000, 0xc20c8c9c70)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/ github.com/cloudfoundry/loggregatorlib/server/handlers/http_handler.go:29 +0x2f5 trafficcontroller/dopplerproxy.(*Proxy).serveWithDoppler(0xc2080ac0d0, 0x7f5171a3b288, 0xc20d33a000, 0xc20c8c9c70, 0xc20c96e1eb, 0xa, 0xc20c96e1c6, 0x24, 0x0, 0x12a05f200, ...)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/trafficcontroller/dopplerproxy/doppler_proxy.go:181 +0x152 trafficcontroller/dopplerproxy.(*Proxy).serveAppLogs(0xc2080ac0d0, 0x7f5171a3b288, 0xc20d33a000, 0xc20c8c9c70)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/trafficcontroller/dopplerproxy/doppler_proxy.go:170 +0x849 trafficcontroller/dopplerproxy.(*Proxy).ServeHTTP(0xc2080ac0d0, 0x7f5171a3b288, 0xc20d33a000, 0xc20c8c9ad0)
/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/trafficcontroller/dopplerproxy/doppler_proxy.go:86 +0x4d9 net/http.serverHandler.ServeHTTP(0xc2080046c0, 0x7f5171a3b288, 0xc20d33a000, 0xc20c8c9ad0) /usr/local/go/src/pkg/net/http/server.go:1673 +0x19f net/http.(*conn).serve(0xc20c9cc680) /usr/local/go/src/pkg/net/http/server.go:1174 +0xa7e created by net/http.(*Server).Serve /usr/local/go/src/pkg/net/http/server.go:1721 +0x313
thx GV
Re: private vs public visibility of apps
Matthias Ender <Matthias.Ender@...>
Excellent.
I can create a domain pointing to the private proxy that's included:
cf create-shared-domain apps.<ip of private_haproxy>.xip.io
and then push my app to it
cf push -d apps.<ip of private_haproxy>.xip.io myapp
Re: Cloud Foundry Group on Slack
Simon Johansson <simon@...>
There are some IRC channels on freenode: #bosh, #cloudfoundry. On Wed, Jul 1, 2015 at 12:02 PM, Pravin Mishra <pravinmishra88(a)gmail.com> wrote: Hello All,
Do we have any Slack group for Cloud Foundry developers?
Best Regards, Pravin Mishra
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
How to update blobs in blob.cfblob.com ?
Alexander Lomov <alexander.lomov@...>
Cloud Foundry Group on Slack
Pravin Mishra <pravinmishra88@...>
Hello All,
Do we have any Slack group for Cloud Foundry developers?
Best Regards, Pravin Mishra
Re: CF rollback issue from v210 to v202
Thanks Ning,
We have also tried the same approach (deleting the unwanted file entries and keeping the v202 migration-related entries in the schema_migrations table) to get back to v202, but after that we were on v202 and not able to upgrade to v210. Since we deleted those entries in ccdb manually, api_z1/0 again did not start up, and because of that the cloud_controller_ng monit process did not start properly.
We have also tried the ccdb snapshot restore option from AWS RDS, with no luck there either.
After downgrading to v202 (or another version), were you by any chance able to upgrade to a higher version again without any issues?
Regards Lingesh M
On Tue, Jun 30, 2015 at 9:24 AM, Ning Fu <nfu(a)pivotal.io> wrote: We encountered the same problem today, and the solution is to delete the records of those files from a table(schema_migrations) in ccdb.
The files are located under cloud_controller_ng/db/migrations/. But it seems ccdb is used as a file name reference.
Regards, Ning
On Tue, Jun 30, 2015 at 8:51 PM, Lingesh Mouleeshwaran < lingeshmouleeshwaran(a)gmail.com> wrote:
Thanks a lot James. we will try it out.
On Tue, Jun 30, 2015 at 12:54 AM, James Bayer <jbayer(a)pivotal.io> wrote:
if you backup the databases before the upgrade, then you could restore the databases before the rollback deployment. we don't ever rollback at pivotal, we roll forward with fixes. i recommend testing upgrades in a test environment to gain confidence. rolling back would be an absolute worst case.
On Mon, Jun 29, 2015 at 4:18 PM, Lingesh Mouleeshwaran < lingeshmouleeshwaran(a)gmail.com> wrote:
Hi James,
Thanks a lot. Could you please tell us the clean way of doing a rollback from v210 to v202?
Regards Lingesh M
On Mon, Jun 29, 2015 at 5:58 PM, James Bayer <jbayer(a)pivotal.io> wrote:
when you upgrade to a newer version of cf-release, it performs database migrations. the message is likely telling you that cf-release v202 code in the cloud controller is not compatible with the db migrations that were performed when upgrading to cf-release v210.
On Mon, Jun 29, 2015 at 2:53 PM, Lingesh Mouleeshwaran < lingeshmouleeshwaran(a)gmail.com> wrote:
Hello Team ,
We were able to upgrade CF v202 to v210 in a development environment; in case of any unknown issue we may want to roll back to v202, so we are trying to roll back from v210 to v202. But BOSH is not able to complete the rollback successfully. We are getting the error below from BOSH.
Error :
Started updating job api_z1 Started updating job api_z1 > api_z1/0 (canary). Failed: `api_z1/0' is not running after update (00:14:53)
Error 400007: `api_z1/0' is not running after update
We are able to ssh to api_z1 successfully, but found the issue below in cloud_controller_ng.
monit summary The Monit daemon 5.2.4 uptime: 13m
Process 'cloud_controller_ng' Execution failed Process 'cloud_controller_worker_local_1' not monitored Process 'cloud_controller_worker_local_2' not monitored Process 'nginx_cc' not monitored Process 'metron_agent' running Process 'check_mk' running System 'system_6e1e4d43-f677-4dc6-ab8a-5b6152504918' running
logs from : /var/vcap/sys/log/cloud_controller_ng_ctl.err.log
[2015-06-29 21:18:55+0000] Tasks: TOP => db:migrate [2015-06-29 21:18:55+0000] (See full trace by running task with --trace) [2015-06-29 21:19:39+0000] ------------ STARTING cloud_controller_ng_ctl at Mon Jun 29 21:19:36 UTC 2015 -------------- [2015-06-29 21:19:39+0000] rake aborted! [2015-06-29 21:19:39+0000] Sequel::Migrator::Error: Applied migration files not in file system: 20150306233007_increase_size_of_delayed_job_handler.rb, 20150311204445_add_desired_state_to_v3_apps.rb, 20150313233039_create_apps_v3_routes.rb, 20150316184259_create_service_key_table.rb, 20150318185941_add_encrypted_environment_variables_to_apps_v3.rb, 20150319150641_add_encrypted_environment_variables_to_v3_droplets.rb, 20150323170053_change_service_instance_description_to_text.rb, 20150323234355_recreate_apps_v3_routes.rb, 20150324232809_add_fk_v3_apps_packages_droplets_processes.rb, 20150325224808_add_v3_attrs_to_app_usage_events.rb, 20150327080540_add_cached_docker_image_to_droplets.rb, 20150403175058_add_index_to_droplets_droplet_hash.rb, 20150403190653_add_procfile_to_droplets.rb, 20150407213536_add_index_to_stack_id.rb, 20150421190248_add_allow_ssh_to_app.rb, 20150422000255_route_path_field.rb, 20150430214950_add_allow_ssh_to_spaces.rb, 20150501181106_rename_apps_allow_ssh_to_enable_ssh.rb, 20150514190458_fix_mysql_collations.rb, 20150515230939_add_case_insensitive_to_route_path.rb cloud_controller_ng_ctl.err.log
Please, any idea if there is some problem with the rollback scripts during the rollback?
Regards Lingesh M
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
-- Thank you,
James Bayer
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
-- Thank you,
James Bayer
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
Re: Installing Diego feedback
Matthew Sykes <matthew.sykes@...>
Thanks for the feedback. I'll let others comment on the bosh aspects other than to say that we are expecting people to use spiff to generate the manifests and that the decision to namespace properties was intentional.
For the SSH proxy, it absolutely does support 2048-bit RSA keys, so I'm not sure why you ran into a problem. Our bosh-lite template uses a 2048-bit key and we have tests that use 1024- and 2048-bit keys in CI. If you want to dig into that, please open an issue.
As for consul, it's TBD whether or not it becomes a strategic solution but it offers capabilities above and beyond bosh links. We kicked off some work today to look at recreating the health checks and dns resolution with a sky dns + etcd solution. If that looks promising, we'll probably go in that direction.
On the etcd side, it's probably best not to share the two for now. Diego is in the process of enabling mutual auth over SSL - something that probably won't be done in cf-release any time soon.
On Tue, Jun 30, 2015 at 8:05 PM, Mike Heath <elcapo(a)gmail.com> wrote: I just got Diego successfully integrated and deployed in my Cloud Foundry dev environment. Here's a bit of feedback.
One of the really nice features of BOSH is that you can set a property once and any job that needs that property can consume it. Unfortunately, the Diego release takes this beautiful feature and throws it out the window. The per-job namespaced properties suck. Sure, this would be easier if I were using Spiff, but our existing deployments don't use Spiff. Unless Spiff is the only supported option for using the Diego BOSH release, the Diego release properties need to be fixed to avoid the mass duplication, and properties that match up with properties in cf-release should be renamed. I spent more time matching up duplicate properties than anything else, which is unfortunate since BOSH should have relieved me of this pain.
SSH Proxy doesn't support 2048 bit RSA keys. I get this error:
{"timestamp":"1435189129.986424685","source":"ssh-proxy","message":"ssh-proxy.failed-to-parse-host-key","log_level":3,"data":{"error":"crypto/rsa: invalid exponents","trace":"goroutine 1 [running]:\ ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc2080640c0, 0x8eba10, 0x18, 0x7fa802383b00, 0xc20802ad80, 0x0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/pivotal-golang/lager/logger.go:131 +0xc8\nmain.configure(0x7fa8023886e0, 0xc2080640c0, 0x7fa8023886e0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:167 +0xacb\nmain.main()\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:75 +0xb4\n"}}
1024-bit keys work just fine.
The *.cc.external_port properties should have a default value (9022) just like cc.external_port does in the cloud_controller_ng job in cf-release.
In the receptor job, there's a property diego.receptor.nats.username but every other job (in cf-release and diego-release) uses nats.user rather than nats.username.
Rather than deploy two etcd jobs, I'm just using the etcd job provided by cf-release. Is there a reason not to do this? Everything appears to be working fine. I haven't yet run the DATs though.
Consul is great and all, but in my dev environment the Consul server crashed a couple of times and it took a while to discover that the reason CF crapped out was because Consul DNS lookups were broken. Is Consul a strategic solution or is it just a stopgap until BOSH Links are ready? (I would prefer removing Consul in favor of BOSH links, for the record.)
-Mike
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
-- Matthew Sykes matthew.sykes(a)gmail.com
On Tue, Jun 30, 2015 at 6:02 PM, Maaz Khan <maazkhansgsits(a)gmail.com> wrote: Hello,
We want to integrate UAA with our ADFS for authentication purposes. Is there a walkthrough on how to do it?
I read that UAA supports SAML and LDAP. There is plenty of information regarding LDAP and UAA integration, but I couldn't find much info regarding SAML.
Can someone please provide some pointers on how one can go about integrating ADFS or SAML configuration with UAA?
Thanks Maaz
_______________________________________________ cf-dev mailing list cf-dev(a)lists.cloudfoundry.org https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
Installing Diego feedback
I just got Diego successfully integrated and deployed in my Cloud Foundry dev environment. Here's a bit of feedback.
One of the really nice features of BOSH is that you can set a property once and any job that needs that property can consume it. Unfortunately, the Diego release takes this beautiful feature and throws it out the window. The per-job namespaced properties suck. Sure, this would be easier if I were using Spiff, but our existing deployments don't use Spiff. Unless Spiff is the only supported option for using the Diego BOSH release, the Diego release properties need to be fixed to avoid the mass duplication, and properties that match up with properties in cf-release should be renamed. I spent more time matching up duplicate properties than anything else, which is unfortunate since BOSH should have relieved me of this pain.
SSH Proxy doesn't support 2048 bit RSA keys. I get this error:
{"timestamp":"1435189129.986424685","source":"ssh-proxy","message":"ssh-proxy.failed-to-parse-host-key","log_level":3,"data":{"error":"crypto/rsa: invalid exponents","trace":"goroutine 1 [running]:\ ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc2080640c0, 0x8eba10, 0x18, 0x7fa802383b00, 0xc20802ad80, 0x0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/pivotal-golang/lager/logger.go:131 +0xc8\nmain.configure(0x7fa8023886e0, 0xc2080640c0, 0x7fa8023886e0, 0x0, 0x0)\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:167 +0xacb\nmain.main()\n\t/var/vcap/packages/ssh_proxy/src/ github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:75 +0xb4\n"}}
1024-bit keys work just fine.
The *.cc.external_port properties should have a default value (9022) just like cc.external_port does in the cloud_controller_ng job in cf-release.
In the receptor job, there's a property diego.receptor.nats.username but every other job (in cf-release and diego-release) uses nats.user rather than nats.username.
Rather than deploy two etcd jobs, I'm just using the etcd job provided by cf-release. Is there a reason not to do this? Everything appears to be working fine. I haven't yet run the DATs though.
Consul is great and all, but in my dev environment the Consul server crashed a couple of times and it took a while to discover that the reason CF crapped out was because Consul DNS lookups were broken. Is Consul a strategic solution or is it just a stopgap until BOSH Links are ready? (I would prefer removing Consul in favor of BOSH links, for the record.)
-Mike