
Re: Notifications for service brokers

Vineet Banga
 

Thanks Juan, I will try to set up a poller for this to achieve similar
functionality. Do you know if there is already a proposal for the better
notifications? If so, could you point me to it? I would like to see whether
it would meet our needs at some point in the future.

On Fri, Aug 14, 2015 at 4:26 PM, Juan Pablo Genovese <
juanpgenovese(a)gmail.com> wrote:

Vineet,

There are some proposals to add better notifications to CF in general and
the CC in particular, but for now you can poll the CC API to get those
events. See http://apidocs.cloudfoundry.org/214/
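
As a minimal polling sketch (the audit.service_instance.* event types come
from the v2 events endpoint in the docs above; the api URL is a placeholder,
and the tail -1 trick assumes a 6.x cli):

curl -k "https://api.example.com/v2/events?q=type:audit.service_instance.create" \
  -H "Authorization: $(cf oauth-token | tail -1)"

Running the same query for the update/delete event types, with a timestamp
filter so you only pick up new events, gets you workable poller semantics.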

Thanks!

2015-08-14 18:31 GMT-03:00 Vineet Banga <vineetbanga1(a)gmail.com>:

Is there any notification pub/sub mechanism in Cloud Foundry for when
services are created/updated/deleted? We are exposing a few services in CF
using service brokers, and we would like some common actions to occur when
our services are created/updated/deleted.


--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com


Re: no more stdout in app files since upgrade to 214

CF Runtime
 

Hi,

What version of the cf cli are you using? There was an update for logging
in cf v208 that changed the endpoint.
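
If it helps, two quick checks (a sketch; the api domain is a placeholder):

cf --version
curl -k https://api.example.com/v2/info

The logging_endpoint field in the /v2/info response shows the endpoint the
platform advertises; a cli from before the v208 logging change may be
talking to the wrong one.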

Thanks,
Joseph & Dan, OSS Release Integration Team

On Mon, Aug 17, 2015 at 7:26 AM, ramonskie <ramon.makkelie(a)klm.com> wrote:

- cf logs APPNAME --recent does not show any difference

- I created my deployment manually because spiff templates do not work with
openstack without neutron,
so I updated my deployment manifest with the template changes that are
reported in the release notes
and checked it against the templates in the templates folder of cf-release

- only one availability zone

- 1 loggregator_trafficcontroller instance
- 2 doppler instances






--
View this message in context:
http://cf-dev.70369.x6.nabble.com/no-more-stdout-in-app-files-since-upgrade-to-214-tp1197p1231.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: I can't connect to my CF API from an application deployed

Amit Kumar Gupta
 

Hi Juan,

Is this question a duplicate of the other one?

Thanks,
Amit

On Mon, Aug 10, 2015 at 5:56 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi,

This morning, I deployed a simple Node.js application using Express.
This application tries to connect to my api:

https://api.MY_IP.xip.io

but when I try to execute it, the system returns an error:

{"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect"}

When I test the project from localhost against the remote CF host, the
code runs fine, but if I execute the code deployed in CF, I receive this
error. Does a CF variable exist, similar to process.env.VCAP_APP_PORT, that
would avoid this problem?

Juan Antonio


Re: How to call Cloud Foundry API from a node.js application deployed?

Amit Kumar Gupta
 

You may need to configure the application-level security groups to allow
apps running in containers to talk to system components such as the Cloud
Controllers. You will likely only want to allow this for trusted
applications, and thus may wish to bind these more permissive security
group settings to the particular space in which your Node app resides. You
can read more about security groups here:

http://docs.pivotal.io/pivotalcf/adminguide/app-sec-groups.html
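
As a sketch (group name, destination range, ports, and org/space are
placeholders to adapt to where your Cloud Controllers live):

cat > cc-api-rules.json <<EOF
[ { "protocol": "tcp", "destination": "10.10.0.0/24", "ports": "80,443" } ]
EOF
cf create-security-group cc-api cc-api-rules.json
cf bind-security-group cc-api MY_ORG MY_SPACE

Apps in that space then need a restart to pick up the new rules.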

On Wed, Aug 12, 2015 at 2:22 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Good morning,

Yes, I tried, but the connection failed:


{"error":{"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect"}}

Later, I tried curl on the same host where I installed CF, and it
succeeded:

curl "https://api.MY_PUBLIC_IP.xip.io/v2/info" -X GET -k

{"name":"vcap","build":"2222","support":"http://support.cloudfoundry.com","version":2,"description":"Cloud
Foundry sponsored by Pivotal","authorization_endpoint":"
https://uaa.MY_PUBLIC_IP.xip.io","token_endpoint":"
https://uaa.MY_PUBLIC_IP.xip.io
","min_cli_version":null,"min_recommended_cli_version":null,"api_version":"2.25.0","logging_endpoint":"wss://
loggregator.MY_PUBLIC_IP.xip.io:4443"}

Why can I connect from curl but not from the application? Do I have to
open some port in CF?

Juan Antonio


Re: cannot access director, trying 4 more times...

Amit Kumar Gupta
 

The directory is on your computer. You probably got it by cloning the
repository from github, so only you know where you cloned it. If you are
using a unix machine, you can use the "find" utility to find directories.
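
For instance, assuming you cloned somewhere under your home directory:

find ~ -maxdepth 4 -type d -name bosh-lite 2>/dev/null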

On Sat, Aug 8, 2015 at 12:50 PM, Qing Gong <qinggong(a)gmail.com> wrote:

How do I find out where the bosh-lite directory is? I tried "which bosh", and it
shows:
/install/users/cfg/.rubies/ruby-2.1.3/bin/bosh

But that's not a directory.
Thanks.


Re: CF integration with logger and monitoring tools

Swatz bosh
 

Thanks for your reply Daniel.

So you are saying that for agent-based monitoring tools like NewRelic, AppDynamics, Wily Introscope, Dynatrace, etc., the firehose approach doesn't make sense, and it has to be implemented through a user-provided service that is bound to apps? And there is no way to pull the logs and metrics of an entire CF instance into such agent-based monitoring tools; the agent has to be bound to each app every time?


Update on Mailman 3 launch

Eric Searcy <eric@...>
 

Thank you for your patience as we continue to pilot Mailman 3 with the Cloud Foundry project. We do appreciate any feedback (and bug reports!) you have from using the system, which can be sent to helpdesk(a)cloudfoundry.org.

We have an open upstream bug about the issue that is causing delays to mail (including messages you post via the web interface sometimes not showing up). I hope this will have a proper resolution by the end of the week, and in the meantime we will be monitoring for this and “unsticking” any messages that get stuck: so if you don’t see any error, then your message *will* get posted.
(https://gitlab.com/mailman/mailman/issues/138)

We have been working with the Mailman maintainers on improvements since before we launched Cloud Foundry on Mailman 3, as well as to fix the problems we’ve found since then. The following is a list of issues we’ve been able to address for the Cloud Foundry environment, so in case you have seen any of the following, they should be fixed very soon if not already:

* HTTP “500” error when authenticated users try to post on the web to a list they are not subscribed to:
https://gitlab.com/mailman/hyperkitty/merge_requests/2
* Terrible performance with large list memberships:
https://gitlab.com/mailman/postorius/issues/25
* Admins cannot delete lists [postgres]:
https://gitlab.com/mailman/mailman/issues/115
* Add web links to forum in footers (not reported by us, but we sponsored the fix):
https://gitlab.com/mailman/mailman/issues/61
* Bug logging in with an email that was added as an alternate:
https://gitlab.com/mailman/postorius/issues/27
* Bug adding an alternate email to your account that already is in DB:
https://gitlab.com/mailman/postorius/merge_requests/5
https://gitlab.com/mailman/mailman/merge_requests/30
* Blurry gravatar icons:
https://gitlab.com/mailman/hyperkitty/issues/8

We’re also fixing the problem with stale search indexes and a few UI improvements, both for archive browsing and moderator queues.

Let us know if you see anything amiss!

--
Eric Searcy, Infrastructure Manager
The Linux Foundation


Re: Overcommit on Diego Cells

Eric Malm <emalm@...>
 

Hi, Mike,

Apologies, I emailed this to cf-dev a few days ago, but it seems not to have gone through. Anyway, thanks for asking about the different configuration values Diego exposes for disk and memory. Yes, you can use the 'diego.executor.memory_capacity_mb' and 'diego.executor.disk_capacity_mb' properties to specify overcommits in absolute terms rather than the relative factors configurable on the DEAs. The cell reps will advertise those values as their maximum memory and disk capacity, and subtract memory and disk for allocated containers when reporting their available capacity during auctions.

The 'btrfs_store_size_mb' property on garden-linux is more of a moving target as garden-linux settles in on that filesystem as a backing store. As of garden-linux-release 0.292.0, which diego-release 0.1412.0 and later consume, that property accepts a '-1' value that allows it to grow up to the full size of the available disk on the /var/vcap/data ephemeral disk volume. The btrfs volume itself is sparse, so it will start at effectively zero size and grow as needed to accommodate the container layers. Since you're already monitoring disk usage on your VMs carefully and scaling out when you hit certain limits, this might be a good option for you. This is also effectively how the DEAs operate today, without an explicit limit on the total amount of disk they allocate for containers.

If you do want more certainty in the maximum size that the garden-linux btrfs volume will grow to, or if you're on a version of diego-release earlier than 0.1412.0, you should set btrfs_store_size_mb to a positive value, and garden-linux will create the volume to grow only up to that size. One strategy to determine that value would be to use the maximum size of the ephemeral disk, less the size of the BOSH-deployed packages (for the executor, currently around 1.3 GB, including the untarred cflinuxfs2 rootfs), less the size allocated to the executor cache in the 'diego.executor.max_cache_size_in_bytes' property (which currently defaults to 10GB).
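
As a rough worked example (the executor property names are the ones above; the garden-linux nesting shown is my assumption, so check it against the release's job spec): with a 100 GB ephemeral disk, ~1.3 GB of BOSH packages, and the default 10 GB executor cache, about 88 GB remains for the btrfs store:

properties:
  diego:
    executor:
      memory_capacity_mb: 32768   # advertise 32 GB of memory to auctions (overcommit in absolute terms)
      disk_capacity_mb: 90112     # ~88 GB expressed in MB
  garden-linux:
    btrfs_store_size_mb: 90112    # or -1 on diego-release >= 0.1412.0 to grow with the disk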

Best,
Eric


Re: Hard-coded domain name in diego etcd job

Eric Malm <emalm@...>
 

Hi, Maggie,

Apologies, I sent this reply to cf-dev earlier, but it seems not to have gone through. Anyway, the 'cf.internal' domain is used internally by CF and Diego components to do service discovery via consul DNS. You shouldn't need to change it, but for Diego to operate correctly you do need to have a consul cluster present in your CF deployment.

Also, I see you're attempting to deploy Diego 0.1402.0 against CF v210, but those versions are not interoperable. As mentioned in the CF release notes[1], we recommend you deploy Diego version 0.1247.0 against CF v210, or that you upgrade to CF v214 and deploy the recommended Diego version 0.1398.0 alongside it. In particular, that internal domain changed from 'consul' to 'cf.internal' after CF v213/Diego v0.1353.0, so there's no way Diego 0.1402.0 will work with CF v210.

Thanks,
Eric, CF Runtime Diego PM

[1]: https://github.com/cloudfoundry/cf-release/releases


Re: Hard-coded domain name in diego etcd job

Amit Kumar Gupta
 

You should not change anything in your DNS servers. It is purely internal;
jobs that need to reach other services over the internal domain should be
colocated with consul_agents which will serve those DNS requests. It's all
self-contained.
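
To sanity-check resolution from a VM where a consul agent is colocated,
something like this should be answered locally (8600 is consul's default DNS
port; whether your deployment exposes it there is an assumption to verify):

dig @127.0.0.1 -p 8600 database-z1-0.etcd.service.cf.internal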

On Mon, Aug 17, 2015 at 7:47 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:

Then how should I configure my DNS server? Which host/job’s IP should be
mapped to the domain “service.cf.internal”?



I also attached the etcd job log. Would you please help to take a look?



Thanks,

Maggie






Re: Hard-coded domain name in diego etcd job

MaggieMeng
 

Then how should I configure my DNS server? Which host/job’s IP should be mapped to the domain “service.cf.internal”?

I also attached the etcd job log. Would you please help to take a look?

Thanks,
Maggie

From: Gwenn Etourneau [mailto:getourneau(a)pivotal.io]
Sent: 2015年8月17日 18:51
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Hard-coded domain name in diego etcd job

You should not change it; this domain is used only internally, with consul as DNS.
Many components rely on it, UAA and so on.

https://github.com/cloudfoundry/cf-release/blob/90d730a2d13d9e065a7f348e7fd31a1522074d02/jobs/consul_agent/templates/config.json.erb

Do you have some logs ?



On Mon, Aug 17, 2015 at 7:41 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com<mailto:xiangyi.meng(a)emc.com>> wrote:
Hi,

I am trying to deploy diego 0.1402.0 onto a vSphere server to work with CF 210. However, the deployment failed when creating the job ‘etcd’ with the following error.

Error: cannot sync with the cluster using endpoints https://database-z1-0.etcd.service.cf.internal:4001

I tried to change the domain name to my own domain name in the diego yml file. But it didn’t work. I found the domain name was hard-coded in etcd_bosh_utils.sh.

https://github.com/cloudfoundry-incubator/diego-release/blob/develop/jobs/etcd/templates/etcd_bosh_utils.sh.erb

Could anyone tell me how to work around it?

Thanks,
Maggie


Re: Request timeout in CloudFoundry

Ronak Banka
 

Hello,

You can set the timeout in your application manifest itself:
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html#timeout
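
For example (a sketch; my-app is a placeholder, and the attribute is
commonly capped at 180 seconds by the platform):

---
applications:
- name: my-app
  timeout: 180

Note this timeout covers app startup/health checks; it does not lengthen
the router's timeout on individual in-flight requests.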

Thanks
Ronak



--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Request-timeout-in-CloudFoundry-tp1232p1233.html
Sent from the CF Dev mailing list archive at Nabble.com.


Request timeout in CloudFoundry

Flávio Henrique Schuindt da Silva <flavio.schuindt at gmail.com...>
 

Hi guys,

How can I increase the request timeout in CF? I would like to give more
time to my app to send a response to the requests before getting a timeout.
Should I increase it in the buildpack? If yes, where should I change it on
java buildpack?

Thanks in advance!




Re: no more stdout in app files since upgrade to 214

ramonskie
 

- cf logs APPNAME --recent does not show any difference

- I created my deployment manually because spiff templates do not work with
openstack without neutron,
so I updated my deployment manifest with the template changes that are
reported in the release notes
and checked it against the templates in the templates folder of cf-release

- only one availability zone

- 1 loggregator_trafficcontroller instance
- 2 doppler instances

see manifest

---
<%
director_uuid = "1a14da86-ea9b-4e56-9831-362151952889"
protocol = "http"
cf_release = "214"
cf_services_release = "0.3-dev"
ip_address = "172.21.42.135"
cc_api_url = "http://api.cf.eden.klm.com"
common_password = "SECRET"
root_domain = "cf.eden.com"
deployment_name = "cloudfoundry"
ip_mysql_node = "172.21.42.137"
%>
name: <%= deployment_name %>
director_uuid: <%= director_uuid %>

releases:
- name: cf
version: <%= cf_release %>
- name: cf-services
version: <%= cf_services_release %>
- name: cf-services-contrib
version: 4.1-dev
- name: cf-mysql
version: 6

compilation:
workers: 4
network: default
reuse_compilation_vms: true
cloud_properties:
instance_type: m1.small

update:
canaries: 1
canary_watch_time: 30000-300000
update_watch_time: 30000-300000
max_in_flight: 2

networks:
- name: floating
type: vip
cloud_properties:
security_groups:
- open
- name: default
type: dynamic
cloud_properties:
security_groups:
- open

resource_pools:
- name: small
network: default
size: 6
stemcell:
name: bosh-openstack-kvm-ubuntu-trusty-go_agent
version: 3031
cloud_properties:
instance_type: m1.small

- name: medium
network: default
size: 8
stemcell:
name: bosh-openstack-kvm-ubuntu-trusty-go_agent
version: 3031
cloud_properties:
instance_type: m1.medium

- name: dea
network: default
size: 9
stemcell:
name: bosh-openstack-kvm-ubuntu-trusty-go_agent
version: 3031
cloud_properties:
instance_type: dea
availability_zone: z210

- name: services
network: default
size: 2
stemcell:
name: bosh-openstack-kvm-ubuntu-lucid-go_agent
version: 2641
cloud_properties:
instance_type: m1.small
- name: services-contrib
network: default
size: 4
stemcell:
name: bosh-openstack-kvm-ubuntu-lucid-go_agent
version: 2641
cloud_properties:
instance_type: m1.small

jobs:
- name: ha_proxy_z1
release: cf
templates:
- name: haproxy
- name: metron_agent
instances: 1
resource_pool: medium
networks:
- name: default
default: [dns, gateway]
- name: floating
static_ips: <%= ip_address %>
properties:
ha_proxy:
ssl_pem: false
disable_http: false
networks:
z1:
apps: default
management: default
router:
port: 80
servers:
z1:
- 172.21.42.190
z2:
- 172.21.42.157
metron_agent:
zone: z1

- name: database
templates:
- name: postgres
- name: metron_agent
release: cf
instances: 1
resource_pool: small
persistent_disk: 32786
networks:
- name: default
default: [dns, gateway]
- name: floating
static_ips:
- 172.21.42.138
properties:
db: databases
metron_agent:
zone: z1

- name: common2
templates:
- name: nats
- name: uaa
- name: collector
- name: metron_agent
release: cf
instances: 1
resource_pool: medium
persistent_disk: 16384
networks:
- name: default
default: [dns, gateway]
- name: floating
static_ips:
- 172.21.42.136
properties:
metron_agent:
zone: z1

- name: dea
templates:
- name: dea_next
- name: dea_logging_agent
- name: metron_agent
release: cf
instances: 9
resource_pool: dea
networks:
- name: default
default: [dns, gateway]
properties:
dea_next:
zone: z1
metron_agent:
zone: z1
networks:
apps: default
update:
max_in_flight: 1

- name: etcd_z1
release: cf
templates:
- name: etcd
release: cf
- name: etcd_metrics_server
release: cf
- name: metron_agent
release: cf
instances: 1
persistent_disk: 10024
resource_pool: small
networks:
- name: default
default: [dns, gateway]
- name: floating
static_ips:
- 172.21.42.189
properties:
metron_agent:
zone: z1

- name: hm9000_z1
release: cf
templates:
- name: hm9000
- name: metron_agent
instances: 1
resource_pool: small
networks:
- name: default
default: [dns, gateway]
properties:
etcd_ips:
- 172.21.42.189
metron_agent:
zone: z1

- name: controller_z1
templates:
- name: cloud_controller_ng
- name: metron_agent
release: cf
instances: 1
resource_pool: medium
networks:
- name: default
default: [dns, gateway]
properties:
metron_agent:
zone: z1

- name: controller_z2
templates:
- name: cloud_controller_ng
- name: metron_agent
release: cf
instances: 1
resource_pool: medium
networks:
- name: default
default: [dns, gateway]
properties:
metron_agent:
zone: z1

- name: controller_worker
templates:
- name: cloud_controller_worker
- name: metron_agent
release: cf
instances: 1
resource_pool: small
networks:
- name: default
default: [dns, gateway]
properties:
metron_agent:
zone: z1

- name: controller_clock
templates:
- name: cloud_controller_clock
- name: metron_agent
release: cf
instances: 1
resource_pool: small
networks:
- name: default
default: [dns, gateway]
properties:
metron_agent:
zone: z1

- name: doppler
templates:
- name: doppler
- name: syslog_drain_binder
- name: metron_agent
release: cf
instances: 2 # Scale out as necessary
resource_pool: medium
networks:
- name: default
default: [dns, gateway]
properties:
doppler:
zone: z1
metron_agent:
zone: z1
networks:
apps: default

- name: loggregator-trafficecontroller
templates:
- name: loggregator_trafficcontroller
- name: metron_agent
release: cf
instances: 1 # Scale out as necessary
resource_pool: small
networks:
- name: default
default: [dns, gateway]
properties:
traffic_controller:
zone: z1
metron_agent:
zone: z1
networks:
apps: default

- name: router_z1
templates:
- name: gorouter
- name: metron_agent
release: cf
instances: 1
resource_pool: medium
networks:
- name: default
default: [dns, gateway]
- name: floating
static_ips: 172.21.42.190
properties:
metron_agent:
zone: z1

- name: router_z2
templates:
- name: gorouter
- name: metron_agent
release: cf
instances: 1
resource_pool: medium
networks:
- name: default
default: [dns, gateway]
- name: floating
static_ips: 172.21.42.157
properties:
metron_agent:
zone: z1

- name: mysql
release: cf-mysql
template: mysql
instances: 1
resource_pool: services
persistent_disk: 16384
networks:
- name: default
default: [dns, gateway]
- name: floating
static_ips:
- <%= ip_mysql_node %>
properties:
admin_password: <%= common_password %>
max_connections: 1500
max_user_connections: 40

- name: cf-mysql-broker
release: cf-mysql
template: cf-mysql-broker
instances: 1
resource_pool: services
networks:
- name: default
default: [dns, gateway]
properties:
max_user_connections_default: ~
auth_username: services
auth_password: <%= common_password %>
cc_api_url: <%= cc_api_url %>
mysql_node:
host: <%= ip_mysql_node %>
admin_password: <%= common_password %>
services:
- name: p-mysql
id: 44b26033-1f54-4087-b7bc-da9652c2a539
description: MySQL service for application development and testing
tags:
- mysql
- relational
max_db_per_node: 250
metadata:
displayName: "Pivotal MySQL Dev"
imageUrl:
longDescription: "A MySQL relational database service for development and testing. The MySQL server is multi-tenant and is not replicated."
providerDisplayName: "Pivotal Software"
documentationUrl:
supportUrl: "http://support.cloudfoundry.com/"
plans:
- name: default
id: ab08f1bc-e6fc-4b56-a767-ee0fea6e3f20
description: Shared MySQL Server, 50MB persistent disk, 40 max concurrent connections
max_storage_mb: 50
metadata:
costs:
- amount:
usd: 0.0
unit: MONTHLY

bullets:
- Shared MySQL server
- 50 MB storage
- 40 concurrent connections

- name: service_gateways
release: cf-services-contrib
templates:
- name: postgresql_gateway_ng
- name: mongodb_gateway
- name: rabbit_gateway
instances: 1
resource_pool: services-contrib
networks:
- name: default
default: [dns, gateway]
properties:
uaa_client_id: cf
uaa_endpoint: http://uaa.<%= root_domain %>
uaa_client_auth_credentials:
username: services
password: <%= common_password %>

- name: postgresql_service_node
release: cf-services-contrib
template: postgresql_node_ng
instances: 1
resource_pool: services-contrib
networks:
- name: default
default: [dns, gateway]
persistent_disk: 10000
properties:
postgresql_node:
db_connect_timeout: 30
plan: default

- name: mongodb_node
template: mongodb_node_ng
release: cf-services-contrib
instances: 1
resource_pool: services-contrib
persistent_disk: 16384
networks:
- name: default
default: [dns, gateway]
properties:
plan: default
uaa_client_id: cf
uaa_endpoint: http://uaa.<%= root_domain %>
uaa_client_auth_credentials:
username: services
password: <%= common_password %>
service_auth_tokens:
mongodb_core: c1oudc0wc1oudc0w

- name: rabbit_node
template: rabbit_node_ng
release: cf-services-contrib
instances: 1
resource_pool: services-contrib
persistent_disk: 16384
networks:
- name: default
default: [dns, gateway]
properties:
plan: default
uaa_client_id: cf
uaa_endpoint: http://uaa.<%= root_domain %>
uaa_client_auth_credentials:
username: services
password: <%= common_password %>

properties:
domain: <%= root_domain %>
system_domain: <%= root_domain %>
system_domain_organization: "cf.eden.klm.com"
app_domains:
- <%= root_domain %>
support_address:
https://www9.klm.com/corporate/pse/confluence/display/EDN/Cloud+Foundry+v2
description: "Cloud Foundry v2 support by the KLM CCWD eDEn team"

ssl:
skip_cert_verify: true

hm9000:
url: <%= protocol %>://hm9000.<%= root_domain %>

networks:
apps: default
management: default

dropsonde:
enabled: true

loggregator:
maxRetainedLogMessages: 100
debug: false
blacklisted_syslog_ranges: ~

doppler:
maxRetainedLogMessages: 100
debug: false
blacklisted_syslog_ranges: ~
unmarshaller_count: 5
port: 4443

loggregator_endpoint:
shared_secret: ilovesecrets

logger_endpoint:
use_ssl: false
port: 80

doppler_endpoint:
shared_secret: ilovesecrets

metron_endpoint:
shared_secret: ilovesecrets

metron_agent:
deployment: openstack

nats:
machines:
- 172.21.42.136
address: 172.21.42.136
port: 4222
user: nats
password: <%= common_password %>
authorization_timeout: 10
use_gnatsd: true

etcd:
machines:
- 172.21.42.189

etcd_ips:
- 172.21.42.189

etcd_metrics_server:
nats:
machines:
- 172.21.42.136
username: nats
password: <%= common_password %>

router:
status:
port: 8080
user: gorouter
password: <%= common_password %>

dea: &dea
memory_mb: 6144
disk_mb: 26384
directory_server_protocol: <%= protocol %>
memory_overcommit_factor: 3
disk_overcommit_factor: 1
default_health_check_timeout: 60
advertise_interval_in_seconds: 5
heartbeat_interval_in_seconds: 10
allow_host_access: true

dea_next: *dea

databases: &databases
db_scheme: postgres
address: 172.21.42.138
port: 5524
roles:
- tag: admin
name: ccadmin
password: <%= common_password %>
- tag: admin
name: uaaadmin
password: <%= common_password %>
databases:
- tag: cc
name: ccdb
citext: true
- tag: uaa
name: uaadb
citext: true

ccdb:
address: 172.21.42.138
databases:
- name: ccdb
tag: cc
db_scheme: postgres
port: 5524
roles:
- name: ccadmin
tag: admin
password: <%= common_password %>

uaadb:
db_scheme: postgresql
address: 172.21.42.138
port: 5524
roles:
- tag: admin
name: uaaadmin
password: <%= common_password %>
databases:
- tag: uaa
name: uaadb
citext: true

serialization_data_server:
port: 8080
logging_level: debug
upload_token: 8f7COGvThwlmulIzAgOHxMXurBrG364k
upload_timeout: 10

collector:
deployment_name: cloudfoundry
use_tsdb: false
use_aws_cloudwatch: false
use_datadog: true
datadog:
api_key: 5beac882c56fc547f0f960e1080b699e
application_key: cloudfoundry

service_lifecycle:
serialization_data_server:
- 172.21.42.136

service_plans:
postgresql:
default:
unique_id: 'ef3d543c-0a1f-4db4-ad80-cc2a2f144e17'
description: "Shared server, shared VM, 1MB memory, 10MB storage, 10
connections"
free: true
job_management:
high_water: 1400
low_water: 100
configuration:
lifecycle:
enable: false
warden:
enable: false
mongodb:
default:
unique_id: "2a42b2de-507a-4775-a23f-dd5303ed5903"
description: "Developer, shared VM, 250MB storage, 20 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 20
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1

rabbit:
default:
unique_id: "56868d0c-00ec-4658-b757-28a6beb03ce0"
description: "Developer, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
bandwidth_quotas:
per_second: 1
per_day: 10
time_window: 86400
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1

postgresql_gateway:
token: c1oudc0wc1oudc0w
default_plan: default
service_timeout: 30
node_timeout: 30
supported_versions: ["9.3"]
version_aliases:
current: "9.3"
cc_api_version: v2
postgresql_node:
supported_versions: ["9.3"]
default_version: "9.3"
max_tmp: 900
password: c1oudc0wc1oudc0w

mongodb_gateway:
token: c1oudc0wc1oudc0w
default_plan: default
supported_versions: ["2.2"]
version_aliases:
current: "2.2"
cc_api_version: v2
mongodb_node:
supported_versions: ["2.2"]
default_version: "2.2"
max_tmp: 900
m_actions: ["restart"] #restarts crashed Mongodb databases

rabbit_gateway:
token: c1oudc0wc1oudc0w
default_plan: "default"
supported_versions: ["3.0"]
version_aliases:
current: "3.0"
cc_api_version: v2
rabbit_node:
supported_versions: ["3.0"]
default_version: "3.0"
max_tmp: 900
m_actions: ["restart"] #restarts crashed rabbitMQ nodes

cc_api_version: v2

cc: &cc
allow_app_ssh_access: false
logging_level: debug2
db_logging_level: debug2
srv_api_uri: <%= protocol %>://api.<%= root_domain %>
cc_partition: default
db_encryption_key: "b963127302433579"
bootstrap_admin_email: "GRPAA927(a)klm.com"
bulk_api_password: <%= common_password %>
internal_api_user: "internal_user"
internal_api_password: <%= common_password %>
uaa_resource_id: cloud_controller
staging_upload_user: uploaduser
staging_upload_password: <%= common_password %>
resource_pool:
resource_directory_key: <%= root_domain %>-cc-resources-new
fog_connection:
provider: "AWS"
host: "s3.eden.klm.com"
scheme: "http"
port: 80
aws_signature_version: "2"
aws_access_key_id: "SECRET"
aws_secret_access_key: "SECRET"
packages:
app_package_directory_key: <%= root_domain %>-cc-packages-new
fog_connection:
provider: "AWS"
host: "s3.eden.com"
scheme: "http"
port: 80
aws_signature_version: "2"
aws_access_key_id: "SECRET"
aws_secret_access_key: "SECRET"
droplets:
droplet_directory_key: <%= root_domain %>-cc-droplets-new
fog_connection:
provider: "AWS"
host: "s3.eden.com"
scheme: "http"
port: 80
aws_signature_version: "2"
aws_access_key_id: "SECRET"
aws_secret_access_key: "SECRET"
buildpacks:
buildpack_directory_key: <%= root_domain %>-cc-buildpacks-new
fog_connection:
provider: "AWS"
host: "s3.eden.com"
scheme: "http"
port: 80
aws_signature_version: "2"
aws_access_key_id: "SECRET"
aws_secret_access_key: "SECRET"
quota_definitions:
free:
non_basic_services_allowed: true
total_services: 4
total_routes: 1000
memory_limit: 8192
paid:
non_basic_services_allowed: true
total_services: 32
total_routes: 1000
memory_limit: 204800
runaway:
non_basic_services_allowed: true
total_services: 500
total_routes: 1000
memory_limit: 204800
trial:
non_basic_services_allowed: false
total_services: 10
total_routes: 1000
memory_limit: 2048
trial_db_allowed: true
default_quota_definition: free
newrelic:
license_key: 21ac88fa53748a364b20697645321a81853a0251
hm9000_noop: false
system_buildpacks:
- name: staticfile_buildpack
package: buildpack_staticfile
- name: java_buildpack
package: buildpack_java
- name: ruby_buildpack
package: buildpack_ruby
- name: nodejs_buildpack
package: buildpack_nodejs
- name: go_buildpack
package: buildpack_go
- name: python_buildpack
package: buildpack_python
- name: php_buildpack
package: buildpack_php
- name: binary_buildpack
package: buildpack_binary
default_buildpacks:
- name: staticfile_buildpack
package: buildpack_staticfile
- name: java_buildpack
package: buildpack_java
- name: ruby_buildpack
package: buildpack_ruby
- name: nodejs_buildpack
package: buildpack_nodejs
- name: go_buildpack
package: buildpack_go
- name: python_buildpack
package: buildpack_python
- name: php_buildpack
package: buildpack_php
- name: binary_buildpack
package: buildpack_binary
install_buildpacks:
- name: staticfile_buildpack
package: buildpack_staticfile
- name: java_buildpack
package: buildpack_java
- name: ruby_buildpack
package: buildpack_ruby
- name: nodejs_buildpack
package: buildpack_nodejs
- name: go_buildpack
package: buildpack_go
- name: python_buildpack
package: buildpack_python
- name: php_buildpack
package: buildpack_php
- name: binary_buildpack
package: buildpack_binary
security_group_definitions:
- name: public_networks
rules:
- protocol: all
destination: 0.0.0.0-9.255.255.255
- protocol: all
destination: 11.0.0.0-169.253.255.255
- protocol: all
destination: 169.255.0.0-172.15.255.255
- protocol: all
destination: 172.32.0.0-192.167.255.255
- protocol: all
destination: 192.169.0.0-255.255.255.255
- protocol: all
destination: 171.0.0.0-171.255.255.255
- protocol: all
destination: 172.0.0.0-175.255.255.255
- protocol: all
destination: 10.0.0.0-11.255.255.255
- name: dns
rules:
- protocol: tcp
destination: 0.0.0.0/0
ports: '53'
- protocol: udp
destination: 0.0.0.0/0
ports: '53'
default_running_security_groups: ["public_networks", "dns"]
default_staging_security_groups: ["public_networks", "dns"]

ccng: *cc

login:
protocol: http
links:
home: http://console.<%= root_domain %>
passwd: http://console.<%= root_domain %>/password_resets/new
signup: http://console.<%= root_domain %>/register

uaa:
url: <%= protocol %>://uaa.<%= root_domain %>
spring_profiles: postgresql
no_ssl: <%= protocol == "http" %>
catalina_opts: -Xmx768m -XX:MaxPermSize=256m
resource_id: account_manager
jwt:
signing_key: |
-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
-----END RSA PRIVATE KEY-----
verification_key: |
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
-----END PUBLIC KEY-----
cc:
client_secret: <%= common_password %>
admin:
client_secret: <%= common_password %>
batch:
username: batchuser
password: <%= common_password %>
client:
autoapprove:
- cf
- vmc
- my
- micro
- support-signon
- login
- styx
clients:
login:
override: true
scope: openid
authorities: oauth.login
secret: c1oudc0w
authorized-grant-types: authorization_code,client_credentials,refresh_token
redirect-uri: http://login.<%= root_domain %>
notifications:
secret: <%= common_password %>
authorities: cloud_controller.admin,scim.read
authorized-grant-types: client_credentials
servicesmgmt:
override: true
secret: serivcesmgmtsecret
scope: openid,cloud_controller.read,cloud_controller.write
authorities: uaa.resource,oauth.service,clients.read,clients.write,clients.secret
authorized-grant-types: authorization_code,client_credentials,password,implicit
redirect-uri: http://servicesmgmt.<%= root_domain %>/auth/cloudfoundry/callback
autoapprove: true
doppler:
override: true
authorities: uaa.resource
secret: ilovesecrets
cloud_controller_username_lookup:
authorities: scim.userids
secret: <%= common_password %>
gorouter:
authorities: clients.read,clients.write,clients.admin,route.admin,route.advertise
authorized-grant-types: client_credentials,refresh_token
scope: openid,cloud_controller_service_permissions.read
secret: <%= common_password %>
scim:
userids_enabled: true
users:
- admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
- services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
- cloudfoundry|SECRET|scim.write,scim.read,openid,cloud_controller.admin





--
View this message in context: http://cf-dev.70369.x6.nabble.com/no-more-stdout-in-app-files-since-upgrade-to-214-tp1197p1231.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: no more APP logs when tailing the app since the upgrade from 207 to 214

James Bayer
 

That commit you link to is pretty old (well over a year) and is not the
problem between v207 and v214; there is likely some configuration issue. I
sent you a response on the other thread you opened and suggest that we
try going through those questions.

On Mon, Aug 17, 2015 at 5:58 AM, Ramon Makkelie <ramon.makkelie(a)klm.com>
wrote:

Since the upgrade from 207 to 214, I noticed 2 things:
1) no more stdout and stderr in the logs/ dir of the app/container;
someone pointed out that this was removed in
https://github.com/cloudfoundry/dea_ng/commit/930d3236b155da8660175198f4a1e4f18bf3cb6d

2) no more APP logs shown when tailing the app;
the only thing I see are the RTR logs.
I checked all the job specs/templates of the loggregator, doppler and
metron_agent, but I can't find anything.


--
Thank you,

James Bayer


Re: no more stdout in app files since upgrade to 214

James Bayer
 

Here are some things that should help us troubleshoot:

- does "cf logs APPNAME --recent" show anything different?
- how did you create your deployment manifest?
- how many availability zones do you have in your deployment?
- how many traffic controllers and doppler instances do you have?
- is the dea_logging_agent co-located with the DEAs, and are your "runner" VMs
configured with jobs something like this [0]?

With our installations, we typically use the acceptance tests (CATS) [1] to
cover platform functionality. There is also a set of acceptance tests just
for loggregator [2]; a sketch of running them follows the links below.

[0]
https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L229-L239
[1] https://github.com/cloudfoundry/cf-acceptance-tests
[2]
https://github.com/cloudfoundry/loggregator/tree/develop/bosh/jobs/loggregator-acceptance-tests
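
A sketch of running CATS, based on the cf-acceptance-tests README of that
era (every value below is a placeholder):

cd cf-acceptance-tests
cat > integration_config.json <<EOF
{
  "api": "api.example.com",
  "admin_user": "admin",
  "admin_password": "SECRET",
  "apps_domain": "example.com",
  "skip_ssl_validation": true
}
EOF
CONFIG=$PWD/integration_config.json ./bin/test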

On Mon, Aug 17, 2015 at 1:11 AM, ramonskie <ramon.makkelie(a)klm.com> wrote:

okay so no problem there
the only thing now is that there is no streaming logs with the cf logs
command
from my APP only RTS
any ideas there?



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/no-more-stdout-in-app-files-since-upgrade-to-214-tp1197p1217.html
Sent from the CF Dev mailing list archive at Nabble.com.
--
Thank you,

James Bayer


Re: Web proxy support in buildpacks

JT Archie <jarchie@...>
 

Jack,

For cached buildpacks, it would not be useful to set HTTP proxying. The
dependencies are bundled with the buildpack and are loaded via the local
file system, not HTTP.

Most of the buildpacks use curl to download the dependencies from the HTTP
server. You should be able to set the environment variables HTTP_PROXY or
HTTPS_PROXY for curl to use the proxy server.
<http://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html> If this works for you, it
would be great to hear your feedback.
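
For example (the app name and proxy URL are placeholders):

cf set-env my-app HTTP_PROXY http://proxy.example.com:8080
cf set-env my-app HTTPS_PROXY http://proxy.example.com:8080
cf restage my-app

The restage matters because the buildpack's downloads happen during staging.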

Kind Regards,

JT

On Mon, Aug 17, 2015 at 9:26 AM, Jack Cai <greensight(a)gmail.com> wrote:

Currently I see that the Java buildpack and the PHP buildpack explicitly
mention in their docs that they can run behind a web proxy, by setting the
HTTP_PROXY and HTTPS_PROXY environment variables. And I suppose this is
supported in both the cached version and the uncached one, and for both
the old lucid64 stack and the new cflinuxfs2 stack (which has a different
Ruby version). Do other buildpacks support the same? I.e., node.js, python,
ruby, go, etc.

Thanks in advance!

Jack


Web proxy support in buildpacks

Jack Cai
 

Currently I see that the Java buildpack and the PHP buildpack explicitly
mention in their docs that they can run behind a web proxy, by setting the
HTTP_PROXY and HTTPS_PROXY environment variables. And I suppose this is
supported in both the cached version and the uncached one, and for both
the old lucid64 stack and the new cflinuxfs2 stack (which has a different
Ruby version). Do other buildpacks support the same? I.e., node.js, python,
ruby, go, etc.

Thanks in advance!

Jack
