
loggregator tc repeating error "websocket: close 1005"

Gianluca Volpe <gvolpe1968@...>
 

Hi Erik,

Did you have the chance to open the chore on this case?

If yes, is there any news about it?

Thanks for your help

Gianluca


cf restage and downtime

Peter Dotchev <dotchev@...>
 

Hello,

We use some user-provided services to supply configuration to our apps.
Sometimes we need to update that configuration without downtime, so we
update the configs using the cf update-user-provided-service command and
then restage the bound apps.
I have noticed that cf restage causes no downtime, as it first starts the
new instances and only then stops the old ones.
This is fine for us, but I could not find any documentation of this
behaviour. Can we rely on it staying that way in the future, or would it be
better to use blue-green deployment (more manual work)?
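
(For comparison, a minimal blue-green sketch with the standard cf CLI; the
app and route names below are hypothetical:)

# push the new version alongside the old one, on a temporary route
cf push myapp-green -n myapp-temp
# add the production route to the new version, then remove it from the old one
cf map-route myapp-green example.com -n myapp
cf unmap-route myapp-blue example.com -n myapp
# once traffic to the new version looks healthy, retire the old version
cf delete myapp-blue -f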

Best regards,
Petar


Re: CF rollback issue from v210 to v202

Lingesh Mouleeshwaran
 

Hi Joseph & Zak,

Actually, we deleted the BOSH deployment and dropped the UAA and CC
databases, then did a fresh v202 deployment and on top of that moved to
v210 successfully. As per James's suggestion, we have also decided to roll
forward from here on. I don't currently have the complete log stack with
me, but I recall the complaint from BOSH was only about ccdb-related
issues. Let me see if I can pull those logs from the BOSH director.
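
(A minimal sketch with the v1 BOSH CLI, assuming the deployment is already
targeted; the job/index names are the ones from this thread:)

bosh logs api_z1 0 --job   # fetches /var/vcap/sys/log/* from api_z1/0 as a tarball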

Regards
Lingesh M

On Thu, Jul 2, 2015 at 7:56 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Simply deleting the schema_migrations table entries may allow the API to
start, but the actual changes from those migrations will need to be rolled
back manually too.

The best approach is certainly to back up and restore the database if you
roll back. Could you give more details on why that didn't work? You may also
need to back up and restore the uaadb database.
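
(As an illustration only, a minimal backup/restore sketch assuming a
MySQL-backed ccdb and uaadb — the thread mentions AWS RDS; hostnames and
users are hypothetical, and pg_dump/psql would be the PostgreSQL
equivalents:)

# before upgrading from v202
mysqldump -h ccdb.example.com -u ccadmin -p ccdb > ccdb-v202.sql
mysqldump -h uaadb.example.com -u uaaadmin -p uaadb > uaadb-v202.sql
# to roll back: redeploy v202, then restore both databases
mysql -h ccdb.example.com -u ccadmin -p ccdb < ccdb-v202.sql
mysql -h uaadb.example.com -u uaaadmin -p uaadb < uaadb-v202.sql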

Joseph & Zak
CF Runtime Team

On Tue, Jun 30, 2015 at 8:53 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Thanks Ning,

We also tried the same approach (deleted the unwanted entries and kept
only the v202 migration files in the schema_migrations table) to get back
to v202. After that we were on v202 but could not upgrade to v210: because
we had deleted those entries in ccdb manually, api_z1/0 again failed to
start, and as a result the cloud_controller_ng monit process did not start
properly.

We also tried restoring a ccdb snapshot from AWS RDS, with no luck either.

After downgrading to v202 (or another version), were you by any chance able
to upgrade to a higher version again without issues?

Regards
Lingesh M

On Tue, Jun 30, 2015 at 9:24 AM, Ning Fu <nfu(a)pivotal.io> wrote:

We encountered the same problem today, and the solution is to delete the
records of those files from a table (schema_migrations) in ccdb.

The files are located under cloud_controller_ng/db/migrations/, but ccdb
appears to reference them by file name.
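
(A sketch of what that looks like, assuming Sequel's timestamp migrator,
which records each applied migration's file name in a filename column; as
noted elsewhere in this thread, the schema changes themselves still remain:)

-- list the applied migrations
SELECT filename FROM schema_migrations ORDER BY filename;
-- remove the rows for migrations newer than v202 (the files listed in the error)
DELETE FROM schema_migrations
 WHERE filename >= '20150306233007_increase_size_of_delayed_job_handler.rb';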

Regards,
Ning

On Tue, Jun 30, 2015 at 8:51 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Thanks a lot, James. We will try it out.

On Tue, Jun 30, 2015 at 12:54 AM, James Bayer <jbayer(a)pivotal.io>
wrote:

if you backup the databases before the upgrade, then you could restore
the databases before the rollback deployment. we don't ever rollback at
pivotal, we roll forward with fixes. i recommend testing upgrades in a test
environment to gain confidence. rolling back would be an absolute worst
case.


On Mon, Jun 29, 2015 at 4:18 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi James,

Thanks a lot. Could you please tell us what the clean way is to roll back
from v210 to v202?

Regards
Lingesh M

On Mon, Jun 29, 2015 at 5:58 PM, James Bayer <jbayer(a)pivotal.io>
wrote:

when you upgrade to a newer version of cf-release, it performs
database migrations. the message is likely telling you that cf-release v202
code in the cloud controller is not compatible with the db migrations that
were performed when upgrading to cf-release v210.

On Mon, Jun 29, 2015 at 2:53 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hello Team ,

We were able to upgrade CF from v202 to v210 in our development environment.
In case of any unknown issue we may want to roll back to v202, so we are
trying to roll back from v210 to v202, but BOSH is not able to complete the
rollback successfully. We are getting the error below from BOSH.

Error :

Started updating job api_z1
Started updating job api_z1 > api_z1/0 (canary). Failed:
`api_z1/0' is not running after update (00:14:53)

Error 400007: `api_z1/0' is not running after update


We are able to SSH into api_z1 successfully, but found the issue below in
cloud_controller_ng.

monit summary
The Monit daemon 5.2.4 uptime: 13m

Process 'cloud_controller_ng' Execution failed
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc' not monitored
Process 'metron_agent' running
Process 'check_mk' running
System 'system_6e1e4d43-f677-4dc6-ab8a-5b6152504918' running

logs from : /var/vcap/sys/log/cloud_controller_ng_ctl.err.log

[2015-06-29 21:18:55+0000] Tasks: TOP => db:migrate
[2015-06-29 21:18:55+0000] (See full trace by running task with
--trace)
[2015-06-29 21:19:39+0000] ------------ STARTING
cloud_controller_ng_ctl at Mon Jun 29 21:19:36 UTC 2015 --------------
[2015-06-29 21:19:39+0000] rake aborted!
[2015-06-29 21:19:39+0000] Sequel::Migrator::Error: Applied
migration files not in file system:
20150306233007_increase_size_of_delayed_job_handler.rb,
20150311204445_add_desired_state_to_v3_apps.rb,
20150313233039_create_apps_v3_routes.rb,
20150316184259_create_service_key_table.rb,
20150318185941_add_encrypted_environment_variables_to_apps_v3.rb,
20150319150641_add_encrypted_environment_variables_to_v3_droplets.rb,
20150323170053_change_service_instance_description_to_text.rb,
20150323234355_recreate_apps_v3_routes.rb,
20150324232809_add_fk_v3_apps_packages_droplets_processes.rb,
20150325224808_add_v3_attrs_to_app_usage_events.rb,
20150327080540_add_cached_docker_image_to_droplets.rb,
20150403175058_add_index_to_droplets_droplet_hash.rb,
20150403190653_add_procfile_to_droplets.rb,
20150407213536_add_index_to_stack_id.rb,
20150421190248_add_allow_ssh_to_app.rb, 20150422000255_route_path_field.rb,
20150430214950_add_allow_ssh_to_spaces.rb,
20150501181106_rename_apps_allow_ssh_to_enable_ssh.rb,
20150514190458_fix_mysql_collations.rb,
20150515230939_add_case_insensitive_to_route_path.rb
cloud_controller_ng_ctl.err.log




Do you have any idea whether the problem is something in the rollback
scripts during the rollback?

Regards
Lingesh M

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: What is "Package Staging" in V3?

Noburou TANIGUCHI
 

Thank you, James.

I've read them and I think I now understand what's in progress.
The PDF at the latter URL is very helpful.




--
View this message in context: http://cf-dev.70369.x6.nabble.com/What-is-Package-Staging-in-V3-tp620p626.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: CF rollback issue from v210 to v202

CF Runtime
 

Simply deleting the schema_migrations table entries may allow the API to
start, but the actual changes from those migrations will need to be rolled
back manually too.

The best approach is certainly to back up and restore the database if you
roll back. Could you give more details on why that didn't work? You may also
need to back up and restore the uaadb database.

Joseph & Zak
CF Runtime Team

On Tue, Jun 30, 2015 at 8:53 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Thanks Ning,

We also tried the same approach (deleted the unwanted entries and kept
only the v202 migration files in the schema_migrations table) to get back
to v202. After that we were on v202 but could not upgrade to v210: because
we had deleted those entries in ccdb manually, api_z1/0 again failed to
start, and as a result the cloud_controller_ng monit process did not start
properly.

We also tried restoring a ccdb snapshot from AWS RDS, with no luck either.

After downgrading to v202 (or another version), were you by any chance able
to upgrade to a higher version again without issues?

Regards
Lingesh M

On Tue, Jun 30, 2015 at 9:24 AM, Ning Fu <nfu(a)pivotal.io> wrote:

We encountered the same problem today, and the solution is to delete the
records of those files from a table (schema_migrations) in ccdb.

The files are located under cloud_controller_ng/db/migrations/, but ccdb
appears to reference them by file name.

Regards,
Ning

On Tue, Jun 30, 2015 at 8:51 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Thanks a lot, James. We will try it out.

On Tue, Jun 30, 2015 at 12:54 AM, James Bayer <jbayer(a)pivotal.io> wrote:

if you backup the databases before the upgrade, then you could restore
the databases before the rollback deployment. we don't ever rollback at
pivotal, we roll forward with fixes. i recommend testing upgrades in a test
environment to gain confidence. rolling back would be an absolute worst
case.


On Mon, Jun 29, 2015 at 4:18 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi James,

Thanks a lot. Could you please tell us what the clean way is to roll back
from v210 to v202?

Regards
Lingesh M

On Mon, Jun 29, 2015 at 5:58 PM, James Bayer <jbayer(a)pivotal.io>
wrote:

when you upgrade to a newer version of cf-release, it performs
database migrations. the message is likely telling you that cf-release v202
code in the cloud controller is not compatible with the db migrations that
were performed when upgrading to cf-release v210.

On Mon, Jun 29, 2015 at 2:53 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hello Team ,

We were able to upgrade CF from v202 to v210 in our development environment.
In case of any unknown issue we may want to roll back to v202, so we are
trying to roll back from v210 to v202, but BOSH is not able to complete the
rollback successfully. We are getting the error below from BOSH.

Error :

Started updating job api_z1
Started updating job api_z1 > api_z1/0 (canary). Failed:
`api_z1/0' is not running after update (00:14:53)

Error 400007: `api_z1/0' is not running after update


We are able to SSH into api_z1 successfully, but found the issue below in
cloud_controller_ng.

monit summary
The Monit daemon 5.2.4 uptime: 13m

Process 'cloud_controller_ng' Execution failed
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc' not monitored
Process 'metron_agent' running
Process 'check_mk' running
System 'system_6e1e4d43-f677-4dc6-ab8a-5b6152504918' running

logs from : /var/vcap/sys/log/cloud_controller_ng_ctl.err.log

[2015-06-29 21:18:55+0000] Tasks: TOP => db:migrate
[2015-06-29 21:18:55+0000] (See full trace by running task with
--trace)
[2015-06-29 21:19:39+0000] ------------ STARTING
cloud_controller_ng_ctl at Mon Jun 29 21:19:36 UTC 2015 --------------
[2015-06-29 21:19:39+0000] rake aborted!
[2015-06-29 21:19:39+0000] Sequel::Migrator::Error: Applied
migration files not in file system:
20150306233007_increase_size_of_delayed_job_handler.rb,
20150311204445_add_desired_state_to_v3_apps.rb,
20150313233039_create_apps_v3_routes.rb,
20150316184259_create_service_key_table.rb,
20150318185941_add_encrypted_environment_variables_to_apps_v3.rb,
20150319150641_add_encrypted_environment_variables_to_v3_droplets.rb,
20150323170053_change_service_instance_description_to_text.rb,
20150323234355_recreate_apps_v3_routes.rb,
20150324232809_add_fk_v3_apps_packages_droplets_processes.rb,
20150325224808_add_v3_attrs_to_app_usage_events.rb,
20150327080540_add_cached_docker_image_to_droplets.rb,
20150403175058_add_index_to_droplets_droplet_hash.rb,
20150403190653_add_procfile_to_droplets.rb,
20150407213536_add_index_to_stack_id.rb,
20150421190248_add_allow_ssh_to_app.rb, 20150422000255_route_path_field.rb,
20150430214950_add_allow_ssh_to_spaces.rb,
20150501181106_rename_apps_allow_ssh_to_enable_ssh.rb,
20150514190458_fix_mysql_collations.rb,
20150515230939_add_case_insensitive_to_route_path.rb
cloud_controller_ng_ctl.err.log




Do you have any idea whether the problem is something in the rollback
scripts during the rollback?

Regards
Lingesh M

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Increasing warden yml network and user pool size

CF Runtime
 

I believe that those limits would only need to be increased if you have
more than 256 warden containers on a single server. Is that the case?

Joseph & Zak
CF Runtime Team

On Wed, Jul 1, 2015 at 4:29 PM, Animesh Singh <animation2007(a)gmail.com>
wrote:

We are seeing some performance bottlenecks at warden, and at times warden
drops all connections under increasing load. We think increasing the
network and user pool_size might help. We have tried effecting those
changes through the CF YML, but they aren't getting set.

Any clues on how we can make this take effect?

sudo more ./var/vcap/data/jobs/dea_next/a25eb00c949666d87c19508cc917f1601a5c5ba8-1360a7f1564ff515d5948677293e3aa209712f4f/config/warden.yml

---
server:
  unix_domain_permissions: 0777
  unix_domain_path: /var/vcap/data/warden/warden.sock
  container_klass: Warden::Container::Linux
  container_rootfs_path: /var/vcap/packages/rootfs_lucid64
  container_depot_path: /var/vcap/data/warden/depot
  container_rlimits:
    core: 0
  pidfile: /var/vcap/sys/run/warden/warden.pid
  quota:
    disk_quota_enabled: true

logging:
  file: /var/vcap/sys/log/warden/warden.log
  level: info
  syslog: vcap.warden

health_check_server:
  port: 2345

network:
  pool_start_address: 10.254.0.0
  pool_size: 256
  # Interface MTU size
  # (for OpenStack use 1454 to avoid problems with rubygems with GRE tunneling)
  mtu: 1400

user:
  pool_start_uid: 20000
  pool_size: 256




Thanks,

Animesh


_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Moving GoRouter and Routing API in CF-Release

Christopher Piraino <cpiraino@...>
 

Hello all,

We have just moved the gorouter and routing-api components from
src/{gorouter, routing-api} to src/github.com/cloudfoundry/gorouter and src/
github.com/cloudfoundry-incubator/routing-api in order to help facilitate
development of them within cf-release.

While doing this, we added direnv support to both projects which will
append the cf-release/ directory path to your GOPATH. We are still using
godeps in both projects as well. The README.md for both projects has been
updated with this information.
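
(For illustration, a minimal sketch of what such an .envrc can look like;
the actual file in cf-release may differ:)

# cf-release/.envrc (sketch)
export GOPATH=$PWD:$GOPATH   # add the cf-release checkout to GOPATH
export PATH=$PWD/bin:$PATH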

If you encounter any trouble with this, please let us know!

Thanks,
Chris and Leo, CF Routing Team


Re: What is "Package Staging" in V3?

James Myers
 

One of the goals of V3 is that we are exposing a few internal CF concepts
as domain objects in the API. Specifically droplets and packages.

In V2, after you have pushed application bits (created a package), the
next time you start your application, staging occurs automatically if the
package has not yet been staged.

We approached this differently in V3. Now you create a package (upload the
bits) and then explicitly stage that package, creating a runnable droplet
which you can then assign to your application. Thus the commit you found is
the beginning of staging packages independently in V3.

You can learn more about this feature from this story(
https://www.pivotaltracker.com/story/show/84894476) and more about V3 from
this epic (https://www.pivotaltracker.com/epic/show/1334418).

Please keep in mind that V3 is still experimental and a work in progress.

Best,

Jim

On Wed, Jul 1, 2015 at 7:29 PM, nota-ja <dev(a)nota.m001.jp> wrote:

Hi,

I recently came across this commit (included in the v198 release):

https://github.com/cloudfoundry/cloud_controller_ng/commit/57f4c04cb49c67a8d9b128f72906e34d8782f547

The commit message says that it is "Initial v3 package staging".

I have searched the release notes (v198 and after), the Web, and this
mailing list but I can't find the word "package staging".

I want to know the entire story of this feature (because the commit says
"Initial"). Is it referred to by another name? Is there any point I missed?
Please give me a clue (the actual feature name, URLs, related commits, etc.)
so I can research it further.

Thanks.



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/What-is-Package-Staging-in-V3-tp620.html
Sent from the CF Dev mailing list archive at Nabble.com.
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: SSH access to CF app instances on Diego

James Myers
 

I have to agree with Matt on this one. I feel that the recycling of
containers is a very anti-developer default. When you approach Cloud
Foundry from the perspective of running production applications the recycle
policy makes complete sense. However, I feel that this misses out on one of
the massive benefits/use cases of Cloud Foundry, what it offers to the
development process.

From a security standpoint, if you can SSH into a container, it means you
have write access to the application in Cloud Foundry. Thus you can already
push new bits or change the application in question. All of the "paper trail"
functionality around pushing/changing applications exists for SSH as well
(we record events, output log lines, and make it visible to users that
action was taken on the application), so concerned operators would be able
to determine whether someone modified the application in question.

Therefore I'm lost on how this is truly the most secure default. If we are
really going by the idea that all defaults should be the most secure, ssh
should be disabled by default.

As a developer, I can see many times in which I would want to be able to
ssh into my container and change my application as part of a
troubleshooting process. Using BOSH as an example, CF Devs constantly ssh
into VMs and change the processes running on them in order to facilitate
development. BOSH does not reap the VM and redeploy a new instance when you
have closed the SSH session. Once again this is largely due to the fact
that if you have SSH access, you can already perform the necessary actions
to change the application through different means.

Another huge hindrance to development is that the recycling policy is
controlled by administrators. It is not something that normal users can
control, even though we allow the granularity of enabling/disabling SSH
completely to the end user. This seems counterintuitive.

I feel that a better solution would be to provide the user with some
knowledge of which instances may be tainted, and then allowing them to opt
into a policy which will reap tainted containers. This provides users with
clear insight that their application instance may be a snowflake (and that
they may want to take action), while also allowing normal behavior with
regards to SSH access to containers.

To summarize, by enabling the recycling policy by default we not only
produce extremely unusual behavior / workflows for developers, we are also
minimizing the developer-friendliness of CF in general. This mixed with the
fact that as a user I cannot even control this policy, leads me to believe
that as a default recycling should be turned off as it provides the most
cohesive and friendly user experience.

On Mon, Jun 29, 2015 at 9:14 AM, John Wong <gokoproject(a)gmail.com> wrote:

after executing a command, concluding an interactive session, or copying
a file into an instance, that instance will be restarted.

How does it monitor the behavior? Is there a list of commands whitelisted?
I am curious because I am trying to find out what the whitelist contains.
Also, is it at the end of the bosh ssh APP_NAME session? What if two users
are there simultaneously?

Thanks.

On Mon, Jun 29, 2015 at 5:49 AM, Dieu Cao <dcao(a)pivotal.io> wrote:

I think with the CLI we could add clarifying messaging when using ssh
what the current policy around recycling is.
Eric, what do you think about calling it the "recycling" policy, enabled
by default? =D

-Dieu


On Sat, Jun 27, 2015 at 3:42 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

Depends on your role and where your app is in the deployment pipeline.
Most of the scenarios I envisioned were for the tail end of development
where you need to poke around to debug and figure out those last few
problems.

For example, Ryan Morgan was saying that the Cloud Foundry plugin for
eclipse is going to be using the ssh support in diego to enable debug of
application instances in the context of a buildpack deployed app. This is
aligned with other requirements I've heard from people working on dev tools.

As apps reach production, I would hope that interactive ssh is disabled
entirely on the prod space leaving only scp in source mode as an option
(something the proxy can do).

Between dev and prod, there's a spectrum, but in general, I either
expect access to be enabled or disabled - not enabled with a suicidal
tendency.

On Thu, Jun 25, 2015 at 10:53 PM, Benjamin Black <bblack(a)pivotal.io>
wrote:

matt,

could you elaborate a bit on what you believe ssh access to instances
is for?


b


On Thu, Jun 25, 2015 at 9:29 PM, Matthew Sykes <matthew.sykes(a)gmail.com
wrote:
My concern is the default behavior.

When I first prototyped this support in February, I never expected
that merely accessing a container would cause it to be terminated. As we
can see from Jan's response, it's completely unexpected; many others have
the same reaction.

I do not believe that this behavior should be part of the default
configuration and I do believe the control needs to be at the space level.
I have have already expressed this opinion during Diego retros and at the
runtime PMC meeting.

I honestly believe that if we were talking about applying this
behavior to `bosh ssh` and `bosh scp`, few would even consider running in a
'kill on taint mode' because of how useful it is. We should learn from that.

If this behavior becomes the default, I think our platform will be
seen as moving from opinionated to parochial. That would be unfortunate.


On Thu, Jun 25, 2015 at 6:05 PM, James Bayer <jbayer(a)pivotal.io>
wrote:

you can turn the "restart tainted containers" feature off with
configuration if you are authorized to do so. then using scp to write files
into a container would be persisted for the lifetime of the container even
after the ssh session ends.

On Thu, Jun 25, 2015 at 5:50 PM, Jan Dubois <jand(a)activestate.com>
wrote:

On Thu, Jun 25, 2015 at 5:36 PM, Eric Malm <emalm(a)pivotal.io> wrote:
after executing a command, concluding an
interactive session, or copying a file into an instance, that
instance will
be restarted.
What is the purpose of being able to copy a file into an instance if
the instance is restarted as soon as the file has been received?

Cheers,
-Jan
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


What is "Package Staging" in V3?

Noburou TANIGUCHI
 

Hi,

I recently came across this commit (included in the v198 release):
https://github.com/cloudfoundry/cloud_controller_ng/commit/57f4c04cb49c67a8d9b128f72906e34d8782f547

The commit message says that it is "Initial v3 package staging".

I have searched the release notes (v198 and after), the Web, and this
mailing list but I can't find the word "package staging".

I want to know the entire story of this feature (because the commit says
"Initial"). Is it referred to by another name? Is there any point I missed?
Please give me a clue (the actual feature name, URLs, related commits, etc.)
so I can research it further.

Thanks.



--
View this message in context: http://cf-dev.70369.x6.nabble.com/What-is-Package-Staging-in-V3-tp620.html
Sent from the CF Dev mailing list archive at Nabble.com.


Increasing warden yml network and user pool size

Animesh Singh
 

We are seeing some performance bottlenecks at warden, and at times warden
drops all connections under increasing load. We think increasing the
network and user pool_size might help. We have tried effecting those
changes through the CF YML, but they aren't getting set.

Any clues on how we can make this take effect?

sudo more ./var/vcap/data/jobs/dea_next/a25eb00c949666d87c19508cc917f1601a5c5ba8-1360a7f1564ff515d5948677293e3aa209712f4f/config/warden.yml

---
server:
  unix_domain_permissions: 0777
  unix_domain_path: /var/vcap/data/warden/warden.sock
  container_klass: Warden::Container::Linux
  container_rootfs_path: /var/vcap/packages/rootfs_lucid64
  container_depot_path: /var/vcap/data/warden/depot
  container_rlimits:
    core: 0
  pidfile: /var/vcap/sys/run/warden/warden.pid
  quota:
    disk_quota_enabled: true

logging:
  file: /var/vcap/sys/log/warden/warden.log
  level: info
  syslog: vcap.warden

health_check_server:
  port: 2345

network:
  pool_start_address: 10.254.0.0
  pool_size: 256
  # Interface MTU size
  # (for OpenStack use 1454 to avoid problems with rubygems with GRE tunneling)
  mtu: 1400

user:
  pool_start_uid: 20000
  pool_size: 256




Thanks,

Animesh


Re: Installing Diego feedback

Mike Heath
 

On Wed, Jul 1, 2015 at 2:46 PM Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Mike,

Thanks for the feedback! Responses inline below.

On Tue, Jun 30, 2015 at 5:05 PM, Mike Heath <elcapo(a)gmail.com> wrote:

I just got Diego successfully integrated and deployed in my Cloud Foundry
dev environment. Here's a bit of feedback.

One of the really nice features of BOSH is that you can set a property
once and any job that needs that property can consume it. Unfortunately,
the Diego release takes this beautiful feature and throws it out the
window. The per-job namespaced properties suck. Sure, this would be easier
if I were using Spiff, but our existing deployments don't use Spiff. Unless
Spiff is the only supported option for using the Diego BOSH release, the
Diego release properties need to be fixed to avoid the mass duplication, and
properties that match up with properties in cf-release should be renamed. I
spent more time matching up duplicate properties than anything else which
is unfortunate since BOSH should have relieved me of this pain.
We intentionally decided to namespace these component properties very
early on in the development of diego-release: initially everything was
collapsed, as it is in cf-release, and then when we integrated against
cf-release deployments and their manifests, we ended up with some property
collisions, especially with etcd. Consequently, we took the opposite tack
and scoped all those properties to the individual diego components to keep
them decoupled. I've generally found it helpful to think of them as 'input
slots' to each specific job, with the authoritative input value coming from
some other source (often a cf-release property), but as you point out that
can be painful and error-prone without another tool such as spiff to
propagate the values. As we explore how we might reorganize parts of
cf-release and diego-release into more granular releases designed for
composition, and as BOSH links emerge to give us richer semantics about how
to flow property information between jobs, we'll iterate on these patterns.
As an immediate workaround, you could also use YAML anchors and aliases to
propagate those values in your hand-crafted manifest.
So, I certainly like the idea of namespacing Diego-specific properties. The
job-level granularity is excessive, though. cf-release is also very old, so a
lot of its properties could be rethought/reorganized. Just warn us when you
make changes. :)

And yeah, I'm already using anchors and aliases all over.




SSH Proxy doesn't support 2048 bit RSA keys. I get this error:

{"timestamp":"1435189129.986424685","source":"ssh-proxy","message":"ssh-proxy.failed-to-parse-host-key","log_level":3,"data":{"error":"crypto/rsa:
invalid exponents","trace":"goroutine 1 [running]:\
ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc2080640c0, 0x8eba10,
0x18, 0x7fa802383b00, 0xc20802ad80, 0x0, 0x0,
0x0)\n\t/var/vcap/packages/ssh_proxy/src/
github.com/pivotal-golang/lager/logger.go:131
+0xc8\nmain.configure(0x7fa8023886e0, 0xc2080640c0, 0x7fa8023886e0, 0x0,
0x0)\n\t/var/vcap/packages/ssh_proxy/src/
github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:167
+0xacb\nmain.main()\n\t/var/vcap/packages/ssh_proxy/src/
github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:75
+0xb4\n"}}

1024-bit keys work just fine.
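
(As a workaround sketch until that is resolved — generating a 1024-bit RSA
host key that the proxy parses; the file name here is illustrative:)

ssh-keygen -t rsa -b 1024 -f ssh_proxy_host_key -N ''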

The *.cc.external_port properties should have a default value (9022) just
like cc.external_port does in the cloud_controller_ng job in cf-release.

In the receptor job, there's a property diego.receptor.nats.username but
every other job (in cf-release and diego-release) uses nats.user rather
than nats.username.
We could standardize on nats.user everywhere (the route-emitter needs
these NATS properties, too, and it also currently uses nats.username). I
also think it makes sense to supply that default CC port in the job specs
and to make sure our spiff templates supply overrides from the cf manifest
correctly. I'll add a story to straighten these out.


Rather than deploy two etcd jobs, I'm just using the etcd job provided by
cf-release. Is there a reason not to do this? Everything appears to be
working fine. I haven't yet run the DATs though.
I agree with Matt: these two etcd clusters will soon become operationally
distinct as we secure access to Diego's internal etcd. I don't believe
anything will currently collide in the keyspace, but we also can't make
strong guarantees about that.
Thanks for the clarification. If anything's colliding in the keyspace, I
haven't found it yet. :) I'll fix my deployment.




Consul is great and all but in my dev environment the Consul server
crashed a couple of times and it took a while to discover that the reason
CF crapped out was because Consul DNS lookups were broken. Is Consul a
strategic solution or is it just a stop gap until BOSH Links are ready? (I
would prefer removing Consul in favor of BOSH links, for the record.)
So far, Consul has provided us with a level of dynamic DNS-based service
discovery beyond what it sounds like BOSH links can: for example, if one of
the receptors is down for some reason, it's removed from the
consul-provided DNS entries in a matter of seconds. That said, we're also
exploring other options to provide that type of service discovery, such as
etcd-backed SkyDNS.
Yeah, that makes sense. I suppose I'm used to everything in cf-release
going through the Gorouter for automatic fail-over. Thanks for the response.



Thanks,
Eric
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: UAA and ADFS

Filip Hanik
 

we don't use 'saml' as a profile anymore. that is gone. if it exists in
documentation we must remove it

On Wed, Jul 1, 2015 at 3:10 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

change

spring_profiles: saml

to

spring_profiles: default

On Wed, Jul 1, 2015 at 3:08 PM, Khan, Maaz <Maaz.Khan(a)emc.com> wrote:

Hi Filip,



Thanks for the links.

Here is what I did.



Checked out UAA code from git.

In resource/uaa.yml file I modified to reflect the use of SAML

spring_profiles: saml



In login.yml I have populated these entries:

saml:
  entityID: https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust
  nameID: 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified'
  assertionConsumerIndex: 0
  signMetaData: true
  signRequest: true
  socket:
    # URL metadata fetch - pool timeout
    connectionManagerTimeout: 10000
    # URL metadata fetch - read timeout
    soTimeout: 10000
  #BEGIN SAML PROVIDERS
  providers:
    openam-local:
      idpMetadata: https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml
      nameID: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
      assertionConsumerIndex: 0
      signMetaData: false
      signRequest: false
      showSamlLoginLink: true
      linkText: 'Log in with OpenAM'


Now when I run UAA locally and hit the URL
http://localhost:8080/uaa/login I get this error

org.springframework.beans.factory.BeanCreationException: Error creating
bean with name 'applicationProperties' defined in class path resource
[spring/env.xml]: Cannot resolve reference to bean 'platformProperties'
while setting bean property 'propertiesArray' with key [0]; nested
exception is
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean
named 'platformProperties' is defined



Given that I have Entity ID –
https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust

And federated metadata from ADFS – :
https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml

What will be the correct steps to integrate with ADFS?



Thanks

Maaz









_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: UAA and ADFS

Filip Hanik
 

change

spring_profiles: saml

to

spring_profiles: default

On Wed, Jul 1, 2015 at 3:08 PM, Khan, Maaz <Maaz.Khan(a)emc.com> wrote:

Hi Filip,



Thanks for the links.

Here is what I did.



Checked out UAA code from git.

In resource/uaa.yml file I modified to reflect the use of SAML

spring_profiles: saml



In login.yml I have populated these entries:

saml:
  entityID: https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust
  nameID: 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified'
  assertionConsumerIndex: 0
  signMetaData: true
  signRequest: true
  socket:
    # URL metadata fetch - pool timeout
    connectionManagerTimeout: 10000
    # URL metadata fetch - read timeout
    soTimeout: 10000
  #BEGIN SAML PROVIDERS
  providers:
    openam-local:
      idpMetadata: https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml
      nameID: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
      assertionConsumerIndex: 0
      signMetaData: false
      signRequest: false
      showSamlLoginLink: true
      linkText: 'Log in with OpenAM'



Now when I run UAA locally and hit the URL http://localhost:8080/uaa/login
I get this error

org.springframework.beans.factory.BeanCreationException: Error creating
bean with name 'applicationProperties' defined in class path resource
[spring/env.xml]: Cannot resolve reference to bean 'platformProperties'
while setting bean property 'propertiesArray' with key [0]; nested
exception is
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean
named 'platformProperties' is defined



Given that I have Entity ID –
https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust

And federated metadata from ADFS – :
https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml

What will be the correct steps to integrate with ADFS?



Thanks

Maaz









_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: UAA and ADFS

Maaz
 

Hi Filip,

Thanks for the links.
Here is what I did.

Checked out UAA code from git.
In the resource/uaa.yml file I modified it to reflect the use of SAML:
spring_profiles: saml

In login.yml I have populated these entries:
saml:
  entityID: https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust
  nameID: 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified'
  assertionConsumerIndex: 0
  signMetaData: true
  signRequest: true
  socket:
    # URL metadata fetch - pool timeout
    connectionManagerTimeout: 10000
    # URL metadata fetch - read timeout
    soTimeout: 10000
  #BEGIN SAML PROVIDERS
  providers:
    openam-local:
      idpMetadata: https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml
      nameID: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
      assertionConsumerIndex: 0
      signMetaData: false
      signRequest: false
      showSamlLoginLink: true
      linkText: 'Log in with OpenAM'

Now when I run UAA locally and hit the URL http://localhost:8080/uaa/login I get this error
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'applicationProperties' defined in class path resource [spring/env.xml]: Cannot resolve reference to bean 'platformProperties' while setting bean property 'propertiesArray' with key [0]; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'platformProperties' is defined

Given that I have Entity ID - https://qeadfs1.qengis.xxxxxx.com/adfs/services/trust
And federated metadata from ADFS - : https://qeadfs1.qengis.xxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml
What will be the correct steps to integrate with ADFS?

Thanks
Maaz


Re: Can't create/update buildpacks, "a filename must be specified"

CF Runtime
 

Hi Kyle,

The fundamental issue with not using Nginx is that all uploads/downloads
block the cloud controller instance. A long blocking request to the CC can
be a serious issue in any CF environment. As such, all instances of the CC
should be deployed with Nginx enabled and activated.

Best,
Zachary Auerbach, CF Runtime Team.

On Wed, Jul 1, 2015 at 12:09 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:

What will be lost by not having nginx? I've had it disabled and haven't
seen other problems before this.

On Tue, Jun 30, 2015 at 7:17 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi Kyle,

This component is specifically designed to work with Nginx. Despite the
fact that you can successfully upload a buildpack by making a small change
with Nginx disabled there are many other areas where not having Nginx will
severely cripple the functionality of the Cloud Controller.

Why are you trying to deploy a CC without Nginx?

Zachary Auerbach, CF Runtime Team.

On Tue, Jun 30, 2015 at 3:21 PM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

I know it's recommended, but uploading buildpacks seems to just be plain
broken without it (though I fixed it by changing 1 line of code in the
cloud controller). The question is, is this supposed to work or is this
something broken that I should make a PR for?

On Tue, Jun 30, 2015 at 5:56 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi Kyle,

We highly recommend using Nginx as a proxy for uploads and downloads
to/from the cloud controller. Without it all long-running data transfers to
the CC will block that instance of the cloud controller.

It's possible, but may have unintended and unsupported side-effects.

Best,
Zachary Auerbach, CF Runtime Team.

On Tue, Jun 30, 2015 at 10:45 AM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

The thing is, I got it to work with use_nginx set to false just by
modifying one line of code in buildpack_bits_controller.rb. Couldn't the
code just be changed to support this?

On Tue, Jun 30, 2015 at 1:36 PM, Dieu Cao <dcao(a)pivotal.io> wrote:

Yes, nginx is required.

-Dieu

On Tue, Jun 30, 2015 at 3:32 PM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

Yes, I have nginx disabled, would that cause problems uploading a
buildpack like this?

On Mon, Jun 29, 2015 at 9:18 PM, Matthew Sykes <
matthew.sykes(a)gmail.com> wrote:

You may need to supply your access log from the nginx in front of
cc or the cc log because when I create a new buildpack, it's working just
fine:

$ CF_TRACE=true cf create-buildpack test-binary-bp
./binary_buildpack-cached-v1.0.1.zip 1 --enable


VERSION:

6.11.3-cebadc9


Creating buildpack test-binary-bp...


REQUEST: [2015-06-29T20:10:37-04:00]

POST /v2/buildpacks?async=true HTTP/1.1

Host: api.10.244.0.34.xip.io

Accept: application/json

Authorization: [PRIVATE DATA HIDDEN]

Content-Type: application/json

User-Agent: go-cli 6.11.3-cebadc9 / darwin


{"name":"test-binary-bp","position":1,"enabled":true}


RESPONSE: [2015-06-29T20:10:37-04:00]

HTTP/1.1 201 Created

Content-Length: 337

Content-Type: application/json;charset=utf-8

Date: Tue, 30 Jun 2015 00:10:37 GMT

Location: /v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b

Server: nginx

X-Cf-Requestid: 49dc1a83-c37a-4311-66e5-5d2a2aea5df3

X-Content-Type-Options: nosniff

X-Vcap-Request-Id:
c7ac7b0c-9261-4b2b-7df6-d7788ba26827::168b561c-4e58-4f7c-9bf4-50ac6589522c


{

"metadata": {

"guid": "16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"url": "/v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"created_at": "2015-06-30T00:10:37Z",

"updated_at": null

},

"entity": {

"name": "test-binary-bp",

"position": 1,

"enabled": true,

"locked": false,

"filename": null

}

}

OK


Uploading buildpack test-binary-bp...


REQUEST: [2015-06-29T20:10:37-04:00]

PUT /v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b/bits
HTTP/1.1

Host: api.10.244.0.34.xip.io

Accept: application/json

Authorization: [PRIVATE DATA HIDDEN]

Content-Type: multipart/form-data;
boundary=a63345d0d8a03bcdf636aed591aa2d57acfe2e910bcc2a3835ed609c270f

User-Agent: go-cli 6.11.3-cebadc9 / darwin



[MULTIPART/FORM-DATA CONTENT HIDDEN]

Done uploading


RESPONSE: [2015-06-29T20:10:37-04:00]

HTTP/1.1 201 Created

Content-Length: 387

Content-Type: application/json;charset=utf-8

Date: Tue, 30 Jun 2015 00:10:37 GMT

Server: nginx

X-Cf-Requestid: dd6cff31-5d91-4730-6f46-cd6e085bd007

X-Content-Type-Options: nosniff

X-Vcap-Request-Id:
f5db441f-1293-429a-460a-74eb71cffaeb::c0a244bf-a50b-47d3-b2f1-cbab01a3d22a


{

"metadata": {

"guid": "16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"url": "/v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"created_at": "2015-06-30T00:10:37Z",

"updated_at": "2015-06-30T00:10:37Z"

},

"entity": {

"name": "test-binary-bp",

"position": 1,

"enabled": true,

"locked": false,

"filename": "binary_buildpack-cached-v1.0.1.zip"

}

}

OK

✓ $ cf buildpacks

Getting buildpacks...


buildpack position enabled locked filename

test-binary-bp 1 true false
binary_buildpack-cached-v1.0.1.zip

staticfile_buildpack 2 true false
staticfile_buildpack-cached-v1.2.0.zip

java_buildpack 3 true false
java-buildpack-v3.0.zip

ruby_buildpack 4 true false
ruby_buildpack-cached-v1.4.2.zip

nodejs_buildpack 5 true false
nodejs_buildpack-cached-v1.3.4.zip

go_buildpack 6 true false
go_buildpack-cached-v1.4.0.zip

python_buildpack 7 true false
python_buildpack-cached-v1.4.0.zip

php_buildpack 8 true false
php_buildpack-cached-v3.3.0.zip

binary_buildpack 9 true false
binary_buildpack-cached-v1.0.1.zip

✓ $ cf --version

cf version 6.11.3-cebadc9-2015-05-20T18:59:33+00:00

For buildpacks, nginx handles most of the heavy lifting and then
passes modified parameters to the cc for processing. The upload processor
then uses the modified params to do the right thing...

Are you running a non-standard configuration that doesn't use nginx
to frontend cc?

On Mon, Jun 29, 2015 at 3:22 PM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

After some more digging I found that it seems to be a problem in
https://github.com/cloudfoundry/cloud_controller_ng/blob/master/app/controllers/runtime/buildpack_bits_controller.rb#L21
.
The 'params' object here is being referenced incorrectly; it
contains a key called 'buildpack' that maps to an object which has a
:filename field which contains the correct buildpack filename, but the code
is trying to reference params['buildpack_name'], which doesn't exist, so it
throws an exception. Changing that above line to say uploaded_filename
= params['buildpack'][:filename] fixed the issue for me. Could this be
caused by my CLI and the cloud controller having out-of-sync versions? The
API version on the CC is 2.23.0, and I've been using the 6.11 CLI.

On Mon, Jun 29, 2015 at 9:31 AM, kyle havlovitz <kylehav(a)gmail.com
wrote:
Here's a gist of the output I get and the command I run:
https://gist.github.com/MrEnzyme/7ebd45c9c34151a52050

On Fri, Jun 26, 2015 at 10:58 PM, Matthew Sykes <
matthew.sykes(a)gmail.com> wrote:

It should work since our acceptance tests validate this on every
build we cut [1]. Are you running the operation as someone with a cc admin
scope?

If you want to create a gist with the log (with secrets
redacted) from running `cf` with CF_TRACE=true, we could certainly take a
look.

[1]:
https://github.com/cloudfoundry/cf-acceptance-tests/blob/cdced815f585ef4661b2182799d1d6a7119489b0/apps/app_stack_test.go#L36-L104

On Fri, Jun 26, 2015 at 2:36 PM, kyle havlovitz <
kylehav(a)gmail.com> wrote:

I'm having an issue where I can't upload any buildpack to
cloudfoundry; it says "The buildpack upload is invalid: a filename must be
specified" and the cf_trace confirms it's sending a null value for
filename. The thing is, I have specified a file name every time and get
this error. I've used a few different CLI versions and uploaded different
buildpacks as both zip files/directories, and nothing works. Is this a bug
in the CLI/cloud controller, or am I doing something wrong?

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Installing Diego feedback

Eric Malm <emalm@...>
 

Hi, Mike,

Thanks for the feedback! Responses inline below.

On Tue, Jun 30, 2015 at 5:05 PM, Mike Heath <elcapo(a)gmail.com> wrote:

I just got Diego successfully integrated and deployed in my Cloud Foundry
dev environment. Here's a bit of feedback.

One of the really nice features of BOSH is that you can set a property
once and any job that needs that property can consume it. Unfortunately,
the Diego release takes this beautiful feature and throws it out the
window. The per-job namespaced properties suck. Sure, this would be easier
if I were using Spiff, but our existing deployments don't use Spiff. Unless
Spiff is the only supported option for using the Diego BOSH release, the
Diego release properties need to be fixed to avoid the mass duplication, and
properties that match up with properties in cf-release should be renamed. I
spent more time matching up duplicate properties than anything else which
is unfortunate since BOSH should have relieved me of this pain.
We intentionally decided to namespace these component properties very early
on in the development of diego-release: initially everything was collapsed,
as it is in cf-release, and then when we integrated against cf-release
deployments and their manifests, we ended up with some property collisions,
especially with etcd. Consequently, we took the opposite tack and scoped
all those properties to the individual diego components to keep them
decoupled. I've generally found it helpful to think of them as 'input
slots' to each specific job, with the authoritative input value coming from
some other source (often a cf-release property), but as you point out that
can be painful and error-prone without another tool such as spiff to
propagate the values. As we explore how we might reorganize parts of
cf-release and diego-release into more granular releases designed for
composition, and as BOSH links emerge to give us richer semantics about how
to flow property information between jobs, we'll iterate on these patterns.
As an immediate workaround, you could also use YAML anchors and aliases to
propagate those values in your hand-crafted manifest.
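
(A minimal sketch of that anchors/aliases workaround; the values are
hypothetical and the exact property keys may differ by release version:)

properties:
  nats:
    user: &nats_user nats
    password: &nats_password NATS_PASSWORD
    machines: &nats_machines [10.10.16.11]
  diego:
    receptor:
      nats:
        username: *nats_user        # receptor currently expects 'username'
        password: *nats_password
        machines: *nats_machines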


SSH Proxy doesn't support 2048 bit RSA keys. I get this error:

{"timestamp":"1435189129.986424685","source":"ssh-proxy","message":"ssh-proxy.failed-to-parse-host-key","log_level":3,"data":{"error":"crypto/rsa:
invalid exponents","trace":"goroutine 1 [running]:\
ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc2080640c0, 0x8eba10,
0x18, 0x7fa802383b00, 0xc20802ad80, 0x0, 0x0,
0x0)\n\t/var/vcap/packages/ssh_proxy/src/
github.com/pivotal-golang/lager/logger.go:131
+0xc8\nmain.configure(0x7fa8023886e0, 0xc2080640c0, 0x7fa8023886e0, 0x0,
0x0)\n\t/var/vcap/packages/ssh_proxy/src/
github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:167
+0xacb\nmain.main()\n\t/var/vcap/packages/ssh_proxy/src/
github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:75
+0xb4\n"}}

1024-bit keys work just fine.

The *.cc.external_port properties should have a default value (9022) just
like cc.external_port does in the cloud_controller_ng job in cf-release.

In the receptor job, there's a property diego.receptor.nats.username but
every other job (in cf-release and diego-release) uses nats.user rather
than nats.username.
We could standardize on nats.user everywhere (the route-emitter needs these
NATS properties, too, and it also currently uses nats.username). I also
think it makes sense to supply that default CC port in the job specs and to
make sure our spiff templates supply overrides from the cf manifest
correctly. I'll add a story to straighten these out.


Rather than deploy two etcd jobs, I'm just using the etcd job provided by
cf-release. Is there a reason not to do this? Everything appears to be
working fine. I haven't yet run the DATs though.
I agree with Matt: these two etcd clusters will soon become operationally
distinct as we secure access to Diego's internal etcd. I don't believe
anything will currently collide in the keyspace, but we also can't make
strong guarantees about that.


Consul is great and all but in my dev environment the Consul server
crashed a couple of times and it took a while to discover that the reason
CF crapped out was because Consul DNS lookups were broken. Is Consul a
strategic solution or is it just a stop gap until BOSH Links are ready? (I
would prefer removing Consul in favor of BOSH links, for the record.)
So far, Consul has provided us with a level of dynamic DNS-based service
discovery beyond what it sounds like BOSH links can: for example, if one of
the receptors is down for some reason, it's removed from the
consul-provided DNS entries in a matter of seconds. That said, we're also
exploring other options to provide that type of service discovery, such as
etcd-backed SkyDNS.

Thanks,
Eric


Re: Can't create/update buildpacks, "a filename must be specified"

kyle havlovitz <kylehav@...>
 

What will be lost by not having nginx? I've had it disabled and haven't
seen other problems before this.

On Tue, Jun 30, 2015 at 7:17 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi Kyle,

This component is specifically designed to work with Nginx. Despite the
fact that you can successfully upload a buildpack by making a small change
with Nginx disabled there are many other areas where not having Nginx will
severely cripple the functionality of the Cloud Controller.

Why are you trying to deploy a CC without Nginx?

Zachary Auerbach, CF Runtime Team.

On Tue, Jun 30, 2015 at 3:21 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I know it's recommended, but uploading buildpacks seems to just be plain
broken without it (though I fixed it by changing 1 line of code in the
cloud controller). The question is, is this supposed to work or is this
something broken that I should make a PR for?

On Tue, Jun 30, 2015 at 5:56 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi Kyle,

We highly recommend using Nginx as a proxy for uploads and downloads
to/from the cloud controller. Without it all long-running data transfers to
the CC will block that instance of the cloud controller.

It's possible, but may have unintended and unsupported side-effects.

Best,
Zachary Auerbach, CF Runtime Team.

On Tue, Jun 30, 2015 at 10:45 AM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

The thing is, I got it to work with use_nginx set to false just by
modifying one line of code in buildpack_bits_controller.rb. Couldn't the
code just be changed to support this?

On Tue, Jun 30, 2015 at 1:36 PM, Dieu Cao <dcao(a)pivotal.io> wrote:

Yes, nginx is required.

-Dieu

On Tue, Jun 30, 2015 at 3:32 PM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

Yes, I have nginx disabled, would that cause problems uploading a
buildpack like this?

On Mon, Jun 29, 2015 at 9:18 PM, Matthew Sykes <
matthew.sykes(a)gmail.com> wrote:

You may need to supply your access log from the nginx in front of cc
or the cc log because when I create a new buildpack, it's working just fine:

$ CF_TRACE=true cf create-buildpack test-binary-bp
./binary_buildpack-cached-v1.0.1.zip 1 --enable


VERSION:

6.11.3-cebadc9


Creating buildpack test-binary-bp...


REQUEST: [2015-06-29T20:10:37-04:00]

POST /v2/buildpacks?async=true HTTP/1.1

Host: api.10.244.0.34.xip.io

Accept: application/json

Authorization: [PRIVATE DATA HIDDEN]

Content-Type: application/json

User-Agent: go-cli 6.11.3-cebadc9 / darwin


{"name":"test-binary-bp","position":1,"enabled":true}


RESPONSE: [2015-06-29T20:10:37-04:00]

HTTP/1.1 201 Created

Content-Length: 337

Content-Type: application/json;charset=utf-8

Date: Tue, 30 Jun 2015 00:10:37 GMT

Location: /v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b

Server: nginx

X-Cf-Requestid: 49dc1a83-c37a-4311-66e5-5d2a2aea5df3

X-Content-Type-Options: nosniff

X-Vcap-Request-Id:
c7ac7b0c-9261-4b2b-7df6-d7788ba26827::168b561c-4e58-4f7c-9bf4-50ac6589522c


{

"metadata": {

"guid": "16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"url": "/v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"created_at": "2015-06-30T00:10:37Z",

"updated_at": null

},

"entity": {

"name": "test-binary-bp",

"position": 1,

"enabled": true,

"locked": false,

"filename": null

}

}

OK


Uploading buildpack test-binary-bp...


REQUEST: [2015-06-29T20:10:37-04:00]

PUT /v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b/bits HTTP/1.1

Host: api.10.244.0.34.xip.io

Accept: application/json

Authorization: [PRIVATE DATA HIDDEN]

Content-Type: multipart/form-data;
boundary=a63345d0d8a03bcdf636aed591aa2d57acfe2e910bcc2a3835ed609c270f

User-Agent: go-cli 6.11.3-cebadc9 / darwin



[MULTIPART/FORM-DATA CONTENT HIDDEN]

Done uploading


RESPONSE: [2015-06-29T20:10:37-04:00]

HTTP/1.1 201 Created

Content-Length: 387

Content-Type: application/json;charset=utf-8

Date: Tue, 30 Jun 2015 00:10:37 GMT

Server: nginx

X-Cf-Requestid: dd6cff31-5d91-4730-6f46-cd6e085bd007

X-Content-Type-Options: nosniff

X-Vcap-Request-Id:
f5db441f-1293-429a-460a-74eb71cffaeb::c0a244bf-a50b-47d3-b2f1-cbab01a3d22a


{

"metadata": {

"guid": "16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"url": "/v2/buildpacks/16e73f3c-3980-4603-ba07-8e5b08b78f7b",

"created_at": "2015-06-30T00:10:37Z",

"updated_at": "2015-06-30T00:10:37Z"

},

"entity": {

"name": "test-binary-bp",

"position": 1,

"enabled": true,

"locked": false,

"filename": "binary_buildpack-cached-v1.0.1.zip"

}

}

OK

✓ $ cf buildpacks

Getting buildpacks...


buildpack position enabled locked filename

test-binary-bp 1 true false
binary_buildpack-cached-v1.0.1.zip

staticfile_buildpack 2 true false
staticfile_buildpack-cached-v1.2.0.zip

java_buildpack 3 true false
java-buildpack-v3.0.zip

ruby_buildpack 4 true false
ruby_buildpack-cached-v1.4.2.zip

nodejs_buildpack 5 true false
nodejs_buildpack-cached-v1.3.4.zip

go_buildpack 6 true false
go_buildpack-cached-v1.4.0.zip

python_buildpack 7 true false
python_buildpack-cached-v1.4.0.zip

php_buildpack 8 true false
php_buildpack-cached-v3.3.0.zip

binary_buildpack 9 true false
binary_buildpack-cached-v1.0.1.zip

✓ $ cf --version

cf version 6.11.3-cebadc9-2015-05-20T18:59:33+00:00

For buildpacks, nginx handles most of the heavy lifting and then
passes modified parameters to the cc for processing. The upload processor
then uses the modified params to do the right thing...
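
For context, the buildpack bits location in the CC's nginx config uses the
third-party upload module and does something along these lines (a from-memory
sketch, not the exact cf-release nginx.conf):

location ~ /v2/buildpacks/.*/bits {
  # buffer the upload to disk instead of streaming it through the CC
  upload_store /var/vcap/data/cloud_controller_ng/tmp/uploads;
  # replace the file part with <field>_name and <field>_path form fields
  upload_set_form_field "${upload_field_name}_name" $upload_file_name;
  upload_set_form_field "${upload_field_name}_path" $upload_tmp_path;
  upload_pass_form_field "^_method$";
  upload_pass @cc_uploads;
}

So by the time the request reaches the CC, the multipart file has already been
swapped out for buildpack_name and buildpack_path parameters, which is what
the controller code expects.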

Are you running a non-standard configuration that doesn't use nginx
to frontend cc?

On Mon, Jun 29, 2015 at 3:22 PM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

After some more digging I found that it seems to be a problem in
https://github.com/cloudfoundry/cloud_controller_ng/blob/master/app/controllers/runtime/buildpack_bits_controller.rb#L21
.
The 'params' object here is being referenced incorrectly; it
contains a key called 'buildpack' that maps to an object whose
:filename field holds the correct buildpack filename, but the code
is trying to reference params['buildpack_name'], which doesn't exist, so it
throws an exception. Changing that above line to say uploaded_filename
= params['buildpack'][:filename] fixed the issue for me. Could this be
caused by my CLI and the cloud controller having out of sync versions? The
api version on the CC is 2.23.0, and I've been using the 6.11 CLI.
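
In code, the change amounts to this (the surrounding context is paraphrased
from the controller, not quoted verbatim):

# app/controllers/runtime/buildpack_bits_controller.rb (paraphrased)
# before: expects the param that nginx's upload handling injects
uploaded_filename = params['buildpack_name']
# after: read the filename from the raw multipart upload hash instead
uploaded_filename = params['buildpack'][:filename]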

On Mon, Jun 29, 2015 at 9:31 AM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

Here's a gist of the output I get and the command I run:
https://gist.github.com/MrEnzyme/7ebd45c9c34151a52050

On Fri, Jun 26, 2015 at 10:58 PM, Matthew Sykes <
matthew.sykes(a)gmail.com> wrote:

It should work since our acceptance tests validate this on every
build we cut [1]. Are you running the operation as someone with a cc admin
scope?

If you want to create a gist with the log (with secrets redacted)
from running `cf` with CF_TRACE=true, we could certainly take a look.

[1]:
https://github.com/cloudfoundry/cf-acceptance-tests/blob/cdced815f585ef4661b2182799d1d6a7119489b0/apps/app_stack_test.go#L36-L104

On Fri, Jun 26, 2015 at 2:36 PM, kyle havlovitz <
kylehav(a)gmail.com> wrote:

I'm having an issue where I can't upload any buildpack to
cloudfoundry; it says "The buildpack upload is invalid: a filename must be
specified" and the cf_trace confirms it's sending a null value for
filename. The thing is, I have specified a file name every time and get
this error. I've used a few different CLI versions and uploaded different
buildpacks as both zip files/directories, and nothing works. Is this a bug
in the CLI/cloud controller, or am I doing something wrong?

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Installing Diego feedback

Mike Heath
 

I have created and used quite a few BOSH releases and the property
namespacing feels very odd to me. I'm very curious to understand the
reasoning behind it.

I can't reproduce my 2048-bit key problem. It must have been some odd fluke
on my end.

Thanks for the response!

-Mike

On Tue, Jun 30, 2015 at 8:37 PM Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

Thanks for the feedback. I'll let others comment on the bosh aspects other
than to say that we are expecting people to use spiff to generate the
manifests and that the decision to namespace properties was intentional.

For the SSH proxy, it absolutely does support 2048 bit RSA keys so I'm not
sure why you ran into a problem. Our bosh-lite template uses a 2048 bit key
and we have tests that use 1024 and 2048 bit keys in CI. If you want to dig
into that, please open an issue.
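
If you do dig in, a quick sanity check is to try a freshly generated 2048 bit
PEM key, for example (the output filename here is arbitrary, and the manifest
property you paste it into depends on how your templates are laid out):

ssh-keygen -t rsa -b 2048 -N '' -f ssh-proxy-host-key

and see whether the proxy still reports "invalid exponents" with that key.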

As for consul, it's TBD whether or not it becomes a strategic solution but
it offers capabilities above and beyond bosh links. We kicked off some work
today to look at recreating the health checks and DNS resolution with a
SkyDNS + etcd solution. If that looks promising, we'll probably go in that
direction.

On the etcd side, it's probably best not to share the two for now. Diego
is in the process of enabling mutual auth over SSL - something that
probably won't be done in cf-release any time soon.

On Tue, Jun 30, 2015 at 8:05 PM, Mike Heath <elcapo(a)gmail.com> wrote:

I just got Diego successfully integrated and deployed in my Cloud Foundry
dev environment. Here's a bit of feedback.

One of the really nice features of BOSH is that you can set a property
once and any job that needs that property can consume it. Unfortunately,
the Diego release takes this beautiful feature and throws it out the
window. The per-job namespaced properties suck. Sure this would be easier
if I were using Spiff but our existing deployments don't use Spiff. Unless
Spiff is the only supported option for using the Diego BOSH release, the
Diego release properties need to be fixed to avoid the mass duplication, and
properties that match up with properties in cf-release should be renamed. I
spent more time matching up duplicate properties than anything else, which
is unfortunate since BOSH should have relieved me of this pain.

SSH Proxy doesn't support 2048 bit RSA keys. I get this error:

{"timestamp":"1435189129.986424685","source":"ssh-proxy","message":"ssh-proxy.failed-to-parse-host-key","log_level":3,"data":{"error":"crypto/rsa:
invalid exponents","trace":"goroutine 1 [running]:\
ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc2080640c0, 0x8eba10,
0x18, 0x7fa802383b00, 0xc20802ad80, 0x0, 0x0,
0x0)\n\t/var/vcap/packages/ssh_proxy/src/
github.com/pivotal-golang/lager/logger.go:131
+0xc8\nmain.configure(0x7fa8023886e0, 0xc2080640c0, 0x7fa8023886e0, 0x0,
0x0)\n\t/var/vcap/packages/ssh_proxy/src/
github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:167
+0xacb\nmain.main()\n\t/var/vcap/packages/ssh_proxy/src/
github.com/cloudfoundry-incubator/diego-ssh/cmd/ssh-proxy/main.go:75
+0xb4\n"}}

1024-bit keys work just fine.

The *.cc.external_port properties should have a default value (9022) just
like cc.external_port does in the cloud_controller_ng job in cf-release.

In the receptor job, there's a property diego.receptor.nats.username but
every other job (in cf-release and diego-release) uses nats.user rather
than nats.username.

Rather than deploy two etcd jobs, I'm just using the etcd job provided by
cf-release. Is there a reason not to do this? Everything appears to be
working fine. I haven't yet run the DATs though.

Consul is great and all but in my dev environment the Consul server
crashed a couple of times and it took a while to discover that the reason
CF crapped out was because Consul DNS lookups were broken. Is Consul a
strategic solution or is it just a stop gap until BOSH Links are ready? (I
would prefer removing Consul in favor of BOSH links, for the record.)

-Mike

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: How to update blobs in blob.cfblob.com ?

Matthew Sykes <matthew.sykes@...>
 

Since you won't be able to upload the blobs to the cf-release bucket, I'd
suggest you capture the output of `bosh blobs` in your pull request. That
command should enumerate all of the new blobs and their sizes.

For each entry that's there, point to a publicly available URL and a hash
that can be used to verify it.

When the PR is reviewed, if things look good, the pair will likely pull the
blobs down to evaluate them and test the overall function.
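
From the release directory, the workflow I have in mind is roughly this (blob
names and versions below are only placeholders):

# record the new/updated blobs in the local blob index
bosh add blob ~/blobs/postgresql-x.y.z.tar.gz postgres
# capture the listing of blob names and sizes for the PR description
bosh blobs
# include a checksum for each file so reviewers can verify what they download
shasum -a 1 ~/blobs/postgresql-x.y.z.tar.gz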

On Wed, Jul 1, 2015 at 6:57 AM, Alexander Lomov <alexander.lomov(a)altoros.com>
wrote:
Hi, all.

I am working on adding support for the Power architecture to CF. During this
work I needed to update not only cf-release itself, but also existing blobs
(postgresql, the mysql client, etc.). I wonder how I can make a PR to the
cf-release project with updated blobs.

I couldn't find any clue in the contributing guide [1], so I've decided
to write here.

[1] https://github.com/cloudfoundry/cf-release/blob/master/CONTRIBUTING.md

Thank you,
Alex L.

------------------------
Alex Lomov
*Altoros* — Cloud Foundry deployment, training and integration
*Twitter:* @code1n <https://twitter.com/code1n> *GitHub:* @allomov
<https://gist.github.com/allomov>

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

--
Matthew Sykes
matthew.sykes(a)gmail.com
