Re: bosh terminates EC2 compilation vm before it started

Marco Voelz
 

Hi,

"execution expired" in most cases means that the registry cannot be reached from the newly created VM.

Make sure that your BOSH director is reachable on port 25777 from the new VM. As Gwenn points out, possible sources of the problem are your network and security group configuration.
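
One quick way to check this is to probe the registry port from a VM in the same subnet/security group as the compilation VMs. A minimal sketch, using only bash built-ins; the default director IP below is taken from the logs in this thread and is a placeholder for your own director's private address:

```shell
# Probe the BOSH registry port from a VM in the compilation subnet.
# DIRECTOR_IP defaults to the director address seen in this thread;
# substitute your own director's private IP.
DIRECTOR_IP="${DIRECTOR_IP:-10.0.1.54}"
PORT=25777
if timeout 5 bash -c "exec 3<>/dev/tcp/${DIRECTOR_IP}/${PORT}" 2>/dev/null; then
  msg="registry reachable on ${DIRECTOR_IP}:${PORT}"
else
  msg="registry NOT reachable on ${DIRECTOR_IP}:${PORT} - check security groups and routing"
fi
echo "$msg"
```

If this prints "NOT reachable" from the compilation subnet, the security group or subnet routing is the place to look.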

Warm regards
Marco

On 28/03/16 06:31, "Gwenn Etourneau" <getourneau(a)pivotal.io<mailto:getourneau(a)pivotal.io>> wrote:

Can be security groups, network configuration and so on..


On Mon, Mar 28, 2016 at 1:01 PM, Younjin Jeong <younjin.jeong(a)gmail.com<mailto:younjin.jeong(a)gmail.com>> wrote:
Hi

I'm currently working on CF deploy to AWS Seoul region.
While deploying the Cloud Foundry 233 release, the compilation EC2 instances keep getting terminated by BOSH with the error message "Failed: execution expired".

I think it expires even before the instance finishes booting, and there aren't many hints I could find in the log messages.


bosh deploy


Deploying
---------
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 79
Started preparing deployment > Preparing deployment. Done (00:00:06)

Started preparing package compilation > Finding packages to compile. Done (00:00:00)

Started compiling packages
Started compiling packages > cli/2b0725c955992dec52458aeb9e764d9f4be18d0a
Started compiling packages > rootfs_cflinuxfs2/85e207d5f9485efb23128b5e0affb79dc168c260. Failed: execution expired (00:02:16)
Failed compiling packages > cli/2b0725c955992dec52458aeb9e764d9f4be18d0a: execution expired (00:02:32)


bosh task 79 --debug | egrep ERROR

root(a)ip-10-0-1-54:/tmp# bosh task 79 --debug | egrep ERROR
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Acting as user 'admin' on 'microbosh'
E, [2016-03-28 03:43:32 #2961] [create_vm(2e657855-c33a-45e5-bb43-7b33bbbce9b0, ...)] ERROR -- DirectorJobRunner: Failed to create instance: execution expired
E, [2016-03-28 03:43:49 #2961] [create_vm(417665c9-cf1f-443d-8058-d5f7f58b8244, ...)] ERROR -- DirectorJobRunner: Failed to create instance: execution expired
E, [2016-03-28 03:44:33 #2961] [compile_package(rootfs_cflinuxfs2/85e207d5f9485efb23128b5e0affb79dc168c260, bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3215)] ERROR -- DirectorJobRunner: error creating vm: execution expired
E, [2016-03-28 03:44:33 #2961] [] ERROR -- DirectorJobRunner: Worker thread raised exception: execution expired - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/httpclient-2.7.1/lib/httpclient/session.rb:597:in `initialize'
E, [2016-03-28 03:44:49 #2961] [compile_package(cli/2b0725c955992dec52458aeb9e764d9f4be18d0a, bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3215)] ERROR -- DirectorJobRunner: error creating vm: execution expired
E, [2016-03-28 03:44:49 #2961] [] ERROR -- DirectorJobRunner: Worker thread raised exception: execution expired - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/httpclient-2.7.1/lib/httpclient/session.rb:597:in `initialize'
E, [2016-03-28 03:44:49 #2961] [task:79] ERROR -- DirectorJobRunner: execution expired


Could you give me some hints to solve this?
Cheers,

--
Younjin Jeong
younjin.jeong(a)gmail.com<mailto:joon.lee(a)sparkandassociates.com>
Mobile : +82-10-7128-2074<tel:%2B82-10-7128-2074>


Question regarding "cf services" and "cf service"

Nils Eckert <Nils.Eckert@...>
 

Hi,

There seems to be a difference between how "cf services" and "cf service"
behave with regard to permissions, and I would like to understand why.

As an organization manager without a role in the space, "cf services"
shows me all service instances that are available within the space.
However, when using "cf service" to show the service instance info for such
a service, I receive an error message telling me that the service instance
was not found.

EXAMPLE

cf services
Getting services in org Multi Region Team / space dev as
nils.eckert(a)de.ibm.com...
OK

name             service         plan   bound apps   last operation
logstash-drain   user-provided

cf service logstash-drain
FAILED
Service instance logstash-drain not found


Mit freundlichen Grüßen / Kind regards



Nils Eckert


Software Engineer, Bluemix Development
IBM Cloud Platform Services


IBM Deutschland Research & Development GmbH
Schoenaicher Strasse 220
D-71032 Boeblingen


Phone:+49-7031-164297
eMail: nils.eckert(a)de.ibm.com

IBM Deutschland Research & Development GmbH / Vorsitzende des
Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht
Stuttgart, HRB 243294


The M2M, IoT & Wearable Technology Ecosystem: 2015 - 2030 - Opportunities, Challenges, Strategies, Industry Verticals and Forecasts

SNS Research <e.hall@...>
 

Hello

Hope you are doing well.

I wanted to bring to your attention the latest SNS Research report in which you might be interested, " The M2M, IoT & Wearable Technology Ecosystem: 2015 - 2030 - Opportunities, Challenges, Strategies, Industry Verticals and Forecasts."

I believe this report will be highly applicable for you. If you would like to see the report sample or have any questions, please let me know.

Report Information:

Release Date: November 2015
Number of Pages: 1,113
Number of Tables and Figures: 553

Report Overview:

As consumer voice and data service revenues reach their saturation point, mobile operators are keen to capitalize on other avenues to drive revenue growth. One such opportunity is providing network connectivity for M2M (Machine to Machine) devices like smart meters, connected cars and healthcare monitors. Despite its low ARPU, M2M connectivity has opened a multi-billion dollar revenue opportunity for mobile operators, MVNOs and service aggregators, addressing the application needs of several vertical markets. By enabling network connectivity among physical objects, M2M has also initiated the IoT (Internet of Things) vision - a global network of sensors, equipment, appliances, smart devices and applications that can communicate in real time.

Another key opportunity is the monetization of wearable technology. Mobile device OEMs are aggressively investing in wearable devices, in order to offset declining margins in their traditional smartphone and tablet markets. As a result, the market has been flooded with a variety of smart bands, smart watches and other wearable devices capable of collecting, sending and processing data over mobile applications.

Eyeing opportunities to route huge volumes of traffic from these wearable devices, many service providers are now seeking to fit wearable technology with their M2M offerings, targeting both consumer and vertical markets. SNS Research expects that M2M and wearable devices can help IoT service providers pocket as much as $231 Billion in service revenue by the end of 2020, following a CAGR of 40% between 2015 and 2020.

Spanning over 1,110 pages, the "M2M, IoT & Wearable Technology Ecosystem: 2015 - 2030 - Opportunities, Challenges, Strategies, Industry Verticals and Forecasts" report package encompasses two comprehensive reports covering M2M, IoT and wearable technology:

The M2M & IoT Ecosystem: 2015 - 2030 - Opportunities, Challenges, Strategies, Industry Verticals and Forecasts
The Wearable Technology Ecosystem: 2015 - 2030 - Opportunities, Challenges, Strategies, Industry Verticals and Forecasts

This report package provides an in-depth assessment of the M2M, IoT and wearable technology ecosystem including enabling technologies, key trends, market drivers, challenges, vertical market applications, deployment case studies, collaborative initiatives, regulatory landscape, standardization, opportunities, future roadmap, value chain, ecosystem player profiles and strategies. The report also presents market size forecasts from 2015 till 2030. The forecasts are segmented into vertical, regional, technology and country submarkets.

The report package comes with an associated Excel datasheet suite covering quantitative data from all numeric forecasts presented in the two reports.

Topics Covered:

The report package covers the following topics:
M2M, IoT and wearable technology ecosystem
Market drivers and challenges
Enabling technologies and key trends
Network architecture and mobile operator business models
Applications, opportunities and deployment case studies for a range of vertical markets including automotive & transportation, asset management & logistics, consumer, energy & utilities, healthcare, home automation, intelligent buildings & infrastructure, military, professional sports, public safety & security, retail and hospitality
Regulatory landscape, collaborative initiatives and standardization
Industry roadmap and value chain assessment
Profiles and strategies of over 600 leading ecosystem players including enabling technology providers, wearable/M2M device OEMs, mobile operators, MVNOs, aggregators, IoT platform providers, system integrators and vertical market specialists
Strategic recommendations for ecosystem players
Market analysis and forecasts from 2015 till 2030

Key Questions Answered:
The report package provides answers to the following key questions:
How big is the M2M, IoT and wearable technology ecosystem?
What trends, challenges and barriers are influencing its growth?
How is the ecosystem evolving by segment and region?
What will the market size be in 2020 and at what rate will it grow?
Which regions, countries and verticals will see the highest percentage of growth?
Who are the key market players and what are their strategies?
How will M2M and wearable devices drive investments in cloud based IoT platforms, Big Data, analytics, network security and other technologies?
What are the growth prospects of cellular, satellite, LPWA, wireline and short range networking technologies?
What are the key applications of M2M, IoT and wearable technology across industry verticals?
How can mobile operators capitalize on the growing popularity of smart glasses and other wearable devices?
What strategies should enabling technology providers, wearable/M2M device OEMs, mobile operators, MVNOs, aggregators, IoT platform providers and other ecosystem players adopt to remain competitive?

Report Pricing:

Single User License: USD 3,500

Company Wide License: USD 4,500


Ordering Process:

Please contact Emily Hall at e.hall(a)snscommunication.com

And provide the following information:
Report Title:
Report License (Single User/Company Wide):
Name:
Email:
Job Title:
Company:

Please contact me if you have any questions, or wish to purchase a copy.

I look forward to hearing from you.

Kind Regards,

Emily Hall

Sales Director
Signals and Systems Research
Email: e.hall(a)snscommunication.com
Address: Reef Tower
Jumeirah Lake Towers
Sheikh Zayed Road
Dubai, UAE

To unsubscribe, please reply to this email with UNSUBSCRIBE in the subject line.


Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update

Yunata, Ricky <rickyy@...>
 

Hi Adrian,

Thanks for your reply. The log file for the database is too big to be attached by e-mail, so I have uploaded to dropbox.
You can access it here:
https://www.dropbox.com/sh/kfuc0uxyxsvb551/AACxn1Ie2VeF_zp_cpJJL-uWa?dl=0

Ricky Yunata
Software & Solution Specialist

Fujitsu Australia Software Technology Pty Ltd
14 Rodborough Road, Frenchs Forest NSW 2086, Australia
T +61 2 9452 9128 M +61 433 977 739 F +61 2 9975 2899
rickyy(a)fast.au.fujitsu.com
fastware.com.au

Please consider the environment before printing this email

-----Original Message-----
From: Adrian Zankich [mailto:azankich(a)pivotal.io]
Sent: Wednesday, 30 March 2016 3:39 AM
To: cf-dev(a)lists.cloudfoundry.org
Subject: [cf-dev] Re: Re: Re: Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update

Hi Ricky,

Thanks for the clarification. If you can give us the logs for all three etcd instances, we can help debug exactly what's going on. You can retrieve the logs from the etcd instances by running:
`bosh logs database_z1 0 && bosh logs database_z2 0 && bosh logs database_z3 0`.

Thanks,

Adrian
Disclaimer

The information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document and all copies thereof.


Whereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.


If you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email unsubscribe(a)fast.au.fujitsu.com


Loggregator /varz nozzle is deprecated

Jim CF Campbell
 

Per the thread in here
<https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/thread/E4XL4GTTDAPNDDEGE7NASKPMLESRIT3Z/#E4XL4GTTDAPNDDEGE7NASKPMLESRIT3Z>
a while back, the varz nozzle
<https://github.com/cloudfoundry-incubator/varz-firehose-nozzle> is no
longer actively supported.

Thanks,
The Loggregator Team

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Does Diego support memory swap?

Will Pragnell <wpragnell@...>
 

Hi Sam,

Sorry for the slow response - Monday and Friday were public holidays in the
UK (where the Garden team are based) so we didn't get a chance to look in
to this.

I think your understanding of that code is correct. I'm not sure why
`/proc/meminfo` is reporting those values for SwapTotal and SwapFree. I'll
try some experiments of my own and see if I can get to the bottom of it.

Best,
Will

On 28 March 2016 at 03:41, Sam Dai <sam.dai(a)servicemax.com> wrote:

According to this code
https://github.com/cloudfoundry-incubator/garden-linux/blob/master/linux_container/limits.go#L74-L75,
memory.limit_in_bytes and memory.memsw.limit_in_bytes are set to the same
limit value, so it looks like when memory usage exceeds the limit, the
kernel on a Diego cell won't swap out any pages.
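
For anyone who wants to verify this on a cell, here is a minimal sketch comparing the two cgroup limits. The helper name is made up, and on a real Diego cell the cgroup path would include the garden container id:

```shell
# Sketch: compare memory.limit_in_bytes with memory.memsw.limit_in_bytes
# for a container's memory cgroup. When the two are equal, there is no
# swap headroom, so the kernel cannot swap the container's pages out.
check_swap_headroom() {
  local cgroup="$1"   # e.g. /sys/fs/cgroup/memory/<container-path> (placeholder)
  local mem memsw
  mem=$(cat "$cgroup/memory.limit_in_bytes")
  memsw=$(cat "$cgroup/memory.memsw.limit_in_bytes")
  if [ "$mem" -eq "$memsw" ]; then
    echo "limits equal: no swap allowed for this container"
  else
    echo "swap headroom: $((memsw - mem)) bytes"
  fi
}
```

You would call it with the memory cgroup directory of the container in question.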


Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update

Adrian Zankich
 

Hi Ricky,

Thanks for the clarification. If you can give us the logs for all three etcd instances, we can help debug exactly what's going on. You can retrieve the logs from the etcd instances by running:
`bosh logs database_z1 0 && bosh logs database_z2 0 && bosh logs database_z3 0`.

Thanks,

Adrian


Re: Hi

Daniel Mikusa
 

On Mon, Mar 28, 2016 at 11:55 PM, Mukul Kansal <kansalmukul(a)gmail.com>
wrote:

Please provide your input on the query in the mail chain above.

Thanks


On Mon, Mar 28, 2016 at 10:14 PM, Dieu Cao <dcao(a)pivotal.io> wrote:

This mailing list has been retired.
Please post to cf-dev(a)lists.cloudfoundry.org


https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/

On Mon, Mar 28, 2016 at 12:32 AM, <kansalmukul(a)gmail.com> wrote:

Hi

I have deployed one webapp on CF that contains 3 websocket endpoints.
But when i tried to call these endpoints from my local through Tyrus API
[ws://websocket-server-1-qa.private.run.covisintrnd.com/websocket/event]

I am getting below handshake error

javax.websocket.DeploymentException: Handshake error.
    at org.glassfish.tyrus.client.ClientManager$3$1.run(ClientManager.java:674)
    at org.glassfish.tyrus.client.ClientManager$3.run(ClientManager.java:712)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at org.glassfish.tyrus.client.ClientManager$SameThreadExecutorService.execute(ClientManager.java:866)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
    at org.glassfish.tyrus.client.ClientManager.connectToServer(ClientManager.java:511)
    at org.glassfish.tyrus.client.ClientManager.connectToServer(ClientManager.java:373)
    at com.standalone.websockets.WebSocketClient.<init>(WebSocketClient.java:30)
    at com.standalone.websockets.MainClient.main(MainClient.java:19)
Caused by: org.glassfish.tyrus.core.HandshakeException: Response code was not 101: 404.

It's getting a 404. Are you trying the correct URL?


    at org.glassfish.tyrus.client.TyrusClientEngine.processResponse(TyrusClientEngine.java:320)
    at org.glassfish.tyrus.container.grizzly.client.GrizzlyClientFilter.handleHandshake(GrizzlyClientFilter.java:346)
    at org.glassfish.tyrus.container.grizzly.client.GrizzlyClientFilter.handleRead(GrizzlyClientFilter.java:315)
    at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
    at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
    at java.lang.Thread.run(Thread.java:744)

Please suggest: does CF support websocket [ws] requests?
Try wss instead of ws. Maybe that will help.

By the way, which CF are you targeting? Did you install your own, or is it a
public cloud?
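
As a quick way to see what the router returns for the handshake, you can probe the endpoint with curl and then reason about the status code. A sketch only: the URL is the one from the original message, and the helper function is hypothetical:

```shell
# Probe the websocket endpoint's HTTP upgrade response (commented out
# here because it needs network access to the actual CF installation):
#   curl -s -o /dev/null -w '%{http_code}' \
#     -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
#     -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==' \
#     http://websocket-server-1-qa.private.run.covisintrnd.com/websocket/event

# Hypothetical helper: interpret the status code from such a probe.
classify_handshake() {
  case "$1" in
    101) echo "handshake accepted" ;;
    404) echo "route not found: check the hostname/path mapped to the app" ;;
    *)   echo "handshake refused with status $1" ;;
  esac
}
classify_handshake 404
```

A 404 before the upgrade completes points at routing (wrong URL, or no app mapped to that route) rather than at websocket support itself.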

Dan


Re: Adding previous_instances and previous_memory fields to cf_event

Hristo Iliev
 

Hi again,

We created https://github.com/cloudfoundry/cloud_controller_ng/pull/569

Can you please take a look and comment on possible problems and missed
use-cases?

Regards,
Hristo Iliev

2016-03-24 18:06 GMT+02:00 Hristo Iliev <hsiliev(a)gmail.com>:

Hi Nick,

Adding previous state sounds good. Will add it in the PR as well.

Thanks,
Hristo Iliev

2016-03-24 17:29 GMT+02:00 Nicholas Calugar <ncalugar(a)pivotal.io>:

Hi Hristo,

I'm fine with a PR to add these two fields. Would it make sense to add
previous state as well?

Thanks,

Nick

On Thu, Mar 24, 2016 at 12:59 AM Dieu Cao <dcao(a)pivotal.io> wrote:

Hi Hristo,

I think a PR to add them would be fine, but I would defer to Nick
Calugar, who's taking over as PM of CAPI, to make that call.

-Dieu

On Wed, Mar 23, 2016 at 2:12 PM, Hristo Iliev <hsiliev(a)gmail.com> wrote:

Hi again,

Would you consider a PR that adds previous memory & instances to the
app usage events? Do these two additional fields make sense?

Regards,
Hristo Iliev


Re: Hi

Mukul Kansal <kansalmukul@...>
 

Please provide your input on the query in the mail chain above.

Thanks

On Mon, Mar 28, 2016 at 10:14 PM, Dieu Cao <dcao(a)pivotal.io> wrote:

This mailing list has been retired.
Please post to cf-dev(a)lists.cloudfoundry.org

https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/

On Mon, Mar 28, 2016 at 12:32 AM, <kansalmukul(a)gmail.com> wrote:

Hi

I have deployed one webapp on CF that contains 3 websocket endpoints.
But when i tried to call these endpoints from my local through Tyrus API
[ws://websocket-server-1-qa.private.run.covisintrnd.com/websocket/event]

I am getting below handshake error

javax.websocket.DeploymentException: Handshake error.
    at org.glassfish.tyrus.client.ClientManager$3$1.run(ClientManager.java:674)
    at org.glassfish.tyrus.client.ClientManager$3.run(ClientManager.java:712)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at org.glassfish.tyrus.client.ClientManager$SameThreadExecutorService.execute(ClientManager.java:866)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
    at org.glassfish.tyrus.client.ClientManager.connectToServer(ClientManager.java:511)
    at org.glassfish.tyrus.client.ClientManager.connectToServer(ClientManager.java:373)
    at com.standalone.websockets.WebSocketClient.<init>(WebSocketClient.java:30)
    at com.standalone.websockets.MainClient.main(MainClient.java:19)
Caused by: org.glassfish.tyrus.core.HandshakeException: Response code was not 101: 404.
    at org.glassfish.tyrus.client.TyrusClientEngine.processResponse(TyrusClientEngine.java:320)
    at org.glassfish.tyrus.container.grizzly.client.GrizzlyClientFilter.handleHandshake(GrizzlyClientFilter.java:346)
    at org.glassfish.tyrus.container.grizzly.client.GrizzlyClientFilter.handleRead(GrizzlyClientFilter.java:315)
    at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
    at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
    at java.lang.Thread.run(Thread.java:744)

Please suggest: does CF support websocket [ws] requests?


Thanks
Mukul


Re: PHP buildpack looking for composer.json file recursively leads to detecting ASP.Net projects as PHP projects

Mike Dalessio
 

Thanks so much for bringing this up. This sounds like something that's
probably easily fixable; though it would be great if we could have a bit
more information about your directory layout.

Would you be willing to create a new Github issue at
https://github.com/cloudfoundry/php-buildpack/issues ?

-m

On Fri, Mar 25, 2016 at 5:19 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

OK. Sorry, I was thinking about code in a different spot. I would agree
that recursively searching is probably unnecessary.

This has probably not come up before because I don't think most users rely
on the detect behavior. If you set `-b` with `cf push` and pick a specific
buildpack, you can work around this behavior.
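
For reference, the workaround looks like this; the app name is a placeholder and `binary_buildpack` stands in for whichever buildpack the app actually needs. Any explicitly chosen buildpack skips the PHP detect:

```shell
# Push with an explicit buildpack so detect scripts never run.
# "my-aspnet-app" is a hypothetical app name.
cf push my-aspnet-app -b binary_buildpack
```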

Dan

On Fri, Mar 25, 2016 at 4:58 PM, Daniel E Grim <degrim(a)us.ibm.com> wrote:

This issue is with the latest PHP build pack code at
https://github.com/cloudfoundry/php-buildpack/blob/master/scripts/detect.py#L26
.

-Dan

----- Forwarded by Daniel E Grim/Raleigh/IBM on 03/25/2016 04:54 PM -----

From: Daniel Mikusa <dmikusa(a)pivotal.io>
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
Date: 03/25/2016 04:52 PM
Subject: [cf-dev] Re: PHP buildpack looking for composer.json file
recursively leads to detecting ASP.Net projects as PHP projects
------------------------------



What version of the PHP build pack are you using? I know this has come
up before and I thought the build pack was changed to look in specific
locations.

Dan

On Fri, Mar 25, 2016 at 3:55 PM, Daniel E Grim <*degrim(a)us.ibm.com*
<degrim(a)us.ibm.com>> wrote:

Hi all,

The PHP buildpack currently looks recursively for a
composer.json file within the project as part of its detect script. This
is causing an issue for users trying to push an ASP.Net project which
contains bower packages that have a composer.json file within their
directory structure because their project gets detected by the PHP
buildpack. Does anyone know if there is a need for the PHP buildpack to
search for this file recursively, or would detecting it in the project root
be enough? If the file could be detected only in the project root, that
would stop the PHP buildpack from detecting ASP.Net projects because those
composer.json files are never in the main folder of the project but in the
wwwroot/scripts/lib folder instead.
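
A root-only detect could look roughly like the sketch below (written in shell rather than the buildpack's actual Python, with an illustrative marker list rather than the real one):

```shell
# Sketch of a root-only detect: claim the app as PHP only when a marker
# file sits in the application root, never in subdirectories such as
# wwwroot/scripts/lib where bower packages keep their own composer.json.
detect_php() {
  local build_dir="$1"
  local marker
  for marker in composer.json index.php; do   # illustrative marker list
    if [ -f "$build_dir/$marker" ]; then
      echo "php"
      return 0
    fi
  done
  echo "no"
  return 1
}
```

With this shape, a composer.json buried under wwwroot/scripts/lib would no longer cause the buildpack to claim an ASP.Net project.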

Thanks,
Dan




Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update

Yunata, Ricky <rickyy@...>
 

Hi Adrian,

Thanks for your comment. I do have consul server in my cf-release deployment.
Currently it’s database_z2/0 that is failing, however if I stop all running etcds on database_z1 and database_z2 and then start it first on database_z2, it works. On the other hand, after the etcd in database_z2 works, the etcd in database_z1 wouldn’t start, so it seems that only 1 etcd can be run.

+---------------------------------------------------------------------------+---------+-----+-----------+---------------+
| VM | State | AZ | VM Type | IPs |
+---------------------------------------------------------------------------+---------+-----+-----------+---------------+
| api_z1/0 (aef6e8d4-e088-420c-89f8-c74c4be0f3c6) | running | n/a | large_z1 | 192.168.1.6 |
| consul_z1/0 (8b1972db-1a24-414a-a40b-924f0d880fda) | running | n/a | small_z1 | 192.168.1.22 |
| doppler_z1/0 (1f044051-be96-4235-bd74-21093156136e) | running | n/a | medium_z1 | 192.168.1.31 |
| etcd_z1/0 (fde0895e-39dc-4826-8f29-be6ef5bb9ee5) | running | n/a | medium_z1 | 192.168.1.18 |
| ha_proxy_z1/0 (20d2a2b4-4c57-4fc4-8f99-0a365fbc8246) | running | n/a | router_z1 | 192.168.1.10 |
| | | | | 137.172.74.81 |
| hm9000_z1/0 (0f124588-7dd0-4351-b606-42630e8bc300) | running | n/a | medium_z1 | 192.168.1.7 |
| loggregator_trafficcontroller_z1/0 (e904e07f-2877-421d-8336-65f8422c4592) | running | n/a | small_z1 | 192.168.1.32 |
| loggregator_z1/0 (28a5d336-9f5f-45a8-b427-73be37a1f37d) | running | n/a | medium_z1 | 192.168.1.9 |
| nats_z1/0 (6a56ebca-a1bb-4192-beb7-86f4ac11b3ca) | running | n/a | medium_z1 | 192.168.1.12 |
| nfs_z1/0 (876a10c2-212e-49a5-913e-2fcce0c215a6) | running | n/a | medium_z1 | 192.168.1.13 |
| router_z1/0 (1fafd912-7357-4d12-8bbd-fedc10f47d40) | running | n/a | router_z1 | 192.168.1.15 |
| runner_z1/0 (d80862bb-f3b9-43da-ab55-fc0af3f5b569) | running | n/a | runner_z1 | 192.168.1.8 |
| stats_z1/0 (2ab128db-8efc-4c2b-98fb-673b0ebaaba4) | running | n/a | small_z1 | 192.168.1.4 |
| uaa_z1/0 (76800329-ad05-442a-a18d-79cb98abec27) | running | n/a | medium_z1 | 192.168.1.5 |
+---------------------------------------------------------------------------+---------+-----+-----------+---------------+

+-----------------------------------------------------------+---------+-----+------------------+--------------+
| VM | State | AZ | VM Type | IPs |
+-----------------------------------------------------------+---------+-----+------------------+--------------+
| access_z1/0 (598f16db-60c2-4c13-bcec-85ae2a38102d) | running | n/a | access_z1 | 192.168.3.44 |
| access_z2/0 (a83d049d-6c95-417e-84f4-9aced8a9136f) | running | n/a | access_z2 | 192.168.4.56 |
| brain_z1/0 (a95c56bb-a84d-41b4-91b1-ade57c773dbe) | running | n/a | brain_z1 | 192.168.3.40 |
| brain_z2/0 (eb386b16-c8e4-4c04-9582-20f4161f6e03) | running | n/a | brain_z2 | 192.168.4.52 |
| cc_bridge_z1/0 (b9870145-26d7-4e59-9358-97c43db6a110) | running | n/a | cc_bridge_z1 | 192.168.3.42 |
| cc_bridge_z2/0 (7477b06f-e501-4757-abda-8e29c7c15464) | running | n/a | cc_bridge_z2 | 192.168.4.54 |
| cell_z1/0 (a6ef0a8c-52c0-4bd2-abfb-2fcf0101dd24) | running | n/a | cell_z1 | 192.168.3.41 |
| cell_z2/0 (36f012e3-2013-44aa-9a92-18161d6854ad) | running | n/a | cell_z2 | 192.168.4.53 |
| database_z1/0 (5428cca8-9832-42f4-9b3a-a822eb6d7e96) | running | n/a | database_z1 | 192.168.3.39 |
| database_z2/0 (16c88d30-fe70-4d42-8307-34cc85521ca7) | failing | n/a | database_z2 | 192.168.4.51 |
| database_z3/0 (c802162f-0681-479e-bb9c-98dac7d78941) | running | n/a | database_z3 | 192.168.5.31 |
| route_emitter_z1/0 (f7f7a8f3-9784-4b99-b0a5-6efb4d193cf5) | running | n/a | route_emitter_z1 | 192.168.3.43 |
| route_emitter_z2/0 (7f4e7fb7-7986-432e-a2e3-b298d3070753) | running | n/a | route_emitter_z2 | 192.168.4.55 |
+-----------------------------------------------------------+---------+-----+------------------+--------------+

Regards,
Ricky



From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Tuesday, 29 March 2016 4:30 AM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update

The consul server cluster will be part of the cf-release deployment. This Diego deployment will be talking to the consul server cluster in the cf-release deployment.

On Mon, Mar 28, 2016 at 10:20 AM, Adrian Zankich <azankich(a)pivotal.io<mailto:azankich(a)pivotal.io>> wrote:
Hello Ricky,

I see that you're trying to run etcd in SSL mode, but I do not see a consul server instance in your instance list. Are you deploying a consul server job?

- Adrian

Disclaimer

The information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document and all copies thereof.


Whereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.


If you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email unsubscribe(a)fast.au.fujitsu.com


Re: Which component of cloud foundry will sunset the VARZ from CF release 233

Shannon Coen
 

I'm also interested in knowing when we can begin removing support for /VARZ
from CF components.

+bcc Jim Campbell (PM for Loggregator)

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.

On Mon, Mar 28, 2016 at 12:34 AM, Patrick Wang <goupeng212wpp(a)gmail.com>
wrote:

We will migrate to CF release 233 soon, and /varz support will be
sunset from HM9000 and the DEA. We are planning to migrate our monitoring tools
from VARZ to the firehose. However, we don't know the whole picture or the
plan for the VARZ sunset across all Cloud Foundry components. We need to
know the plan for the VARZ sunset for the Cloud Controller, Router, UAA, and the
service nodes (MongoaaS-Node, MyaaS-Node, etc.). Does anyone know about
that?


Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update

Amit Kumar Gupta
 

The consul server cluster will be part of the cf-release deployment. This
Diego deployment will be talking to the consul server cluster in the
cf-release deployment.

On Mon, Mar 28, 2016 at 10:20 AM, Adrian Zankich <azankich(a)pivotal.io>
wrote:

Hello Ricky,

I see that you're trying to run etcd in SSL mode, but I do not see a
consul server instance in your instance list. Are you deploying a consul
server job?

- Adrian


Re: Failed to deploy diego 0.1452.0 on openstack: database_z2/0 is not running after update

Adrian Zankich
 

Hello Ricky,

I see that you're trying to run etcd in SSL mode, but I do not see a consul server instance in your instance list. Are you deploying a consul server job?

- Adrian


Re: reg the haproxy config changes

Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM@Cisco) <ngnanase at cisco.com...>
 

Hi Amit

I could log in now after deleting the hosts file. Sorry for bothering you about this.
I could see the config changes in haproxy.ctmpl.erb and haproxy.conf.erb after logging into the haproxy VM.

Thanks for your enormous support

Regards
Nithiyasri


From: Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)
Sent: Monday, March 28, 2016 7:08 PM
To: 'Amit Gupta' <agupta(a)pivotal.io>
Cc: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>; Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com>; Gwenn Etourneau <getourneau(a)pivotal.io>
Subject: reg the haproxy config changes

Hi Amit

We tried building a cf bosh release after adding extra rules specific to our application in the following 2 files
haproxy.ctmpl.erb (found this in cf-231 only; didn't see it in cf-205)
haproxy.conf.erb

Also added parameters in spec file that these 2 files are using to fetch the values dynamically from the deployment manifest

ha_proxy.domain_root1:
  description: "Domain Root1"
  default: root1.com
ha_proxy.domain_root2:
  description: "Domain Root2"
  default: root2.com

After adding these 2 changes, we built a bosh release on top of cf-231 and deployed successfully and all VMs are running.
But I could not log in to the haproxy VM, though I can log in to the other VMs.

We did the same for cf-205 and it works well; we could log in there.
Kindly let me know what the issue could be.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
b0:8a:62:f4:91:75:34:71:c1:23:fb:4b:80:1e:33:0d.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:31
remove with: ssh-keygen -f "/root/.ssh/known_hosts" -R 10.20.0.20
ECDSA host key for 10.20.0.20 has changed and you have requested strict checking.
Host key verification failed.

Regards
Nithiyasri


Re: add Diego into our monitoring system

Jim CF Campbell
 

Hi Patrick,

The basic design philosophy is for all logging and metric data to be
transported by Loggregator to the firehose. At that point the design intent
is to filter as appropriate. Sorry, but not only do we have no plans to
filter at the data origin, we have roadmap items to add *more* data to
Loggregator with system component syslogs, and for PCF, custom app metrics.

Jim

On Mon, Mar 28, 2016 at 1:11 AM, Patrick Wang <goupeng212wpp(a)gmail.com>
wrote:

Hi Jim,
I pulled the source code of the Datadog nozzle. The Datadog nozzle also
gets all drain data from the traffic controller and then filters inside the
nozzle, so the nozzle still receives the entire stream from the traffic
controller. That means a large network overhead on metron/doppler/traffic
controller. From my perspective, to avoid this heavy network traffic, it
would be better to filter on the metron/doppler/traffic controller side. Do
you know if there is a plan to add filtering on that side?
func getValue(envelope *events.Envelope) float64 {
	switch envelope.GetEventType() {
	case events.Envelope_ValueMetric:
		return envelope.GetValueMetric().GetValue()
	case events.Envelope_CounterEvent:
		return float64(envelope.GetCounterEvent().GetTotal())
	default:
		panic("Unknown event type")
	}
}


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963
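Patrick's quoted snippet dispatches on the envelope's event type inside the nozzle. A minimal, self-contained sketch of that client-side filtering pattern is below; it uses a stand-in Envelope type rather than the real dropsonde events package, so the type names here are illustrative, not the actual Loggregator API. The point it demonstrates is the one Patrick raises: every envelope still crosses the network before being dropped on the client side.

```go
package main

import "fmt"

// EventType is a stand-in for the dropsonde events.Envelope event types
// (illustrative only, not the real package).
type EventType int

const (
	ValueMetric EventType = iota
	CounterEvent
	LogMessage
)

// Envelope is a simplified stand-in for events.Envelope.
type Envelope struct {
	Type  EventType
	Value float64
	Total uint64
}

// getValue mirrors the quoted nozzle snippet: extract a numeric value per
// event type. Unlike the original, it returns ok=false for unknown types
// instead of panicking, so they can simply be skipped.
func getValue(e Envelope) (float64, bool) {
	switch e.Type {
	case ValueMetric:
		return e.Value, true
	case CounterEvent:
		return float64(e.Total), true
	default:
		return 0, false
	}
}

// filterMetrics keeps only the envelopes the nozzle cares about. This is
// client-side filtering: the full firehose stream has already been
// delivered before anything is discarded here.
func filterMetrics(in []Envelope) []float64 {
	var out []float64
	for _, e := range in {
		if v, ok := getValue(e); ok {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	stream := []Envelope{
		{Type: ValueMetric, Value: 1.5},
		{Type: LogMessage},
		{Type: CounterEvent, Total: 42},
	}
	fmt.Println(filterMetrics(stream))
}
```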


reg the haproxy config changes

Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM@Cisco) <ngnanase at cisco.com...>
 

Hi Amit

We tried building a cf bosh release after adding extra rules specific to our application in the following 2 files
haproxy.ctmpl.erb (found this in cf-231 only; didn't see it in cf-205)
haproxy.conf.erb

Also added parameters in spec file that these 2 files are using to fetch the values dynamically from the deployment manifest

ha_proxy.domain_root1:
  description: "Domain Root1"
  default: root1.com
ha_proxy.domain_root2:
  description: "Domain Root2"
  default: root2.com

After adding these 2 changes, we built a bosh release on top of cf-231 and deployed successfully and all VMs are running.
But I could not log in to the haproxy VM, though I can log in to the other VMs.

We did the same for cf-205 and it works well; we could log in there.
Kindly let me know what the issue could be.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
b0:8a:62:f4:91:75:34:71:c1:23:fb:4b:80:1e:33:0d.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:31
remove with: ssh-keygen -f "/root/.ssh/known_hosts" -R 10.20.0.20
ECDSA host key for 10.20.0.20 has changed and you have requested strict checking.
Host key verification failed.

Regards
Nithiyasri


Re: Reg Combining the jobs of router and hm9000 in cf-231

Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM@Cisco) <ngnanase at cisco.com...>
 

Thank you Amit. It worked by changing the port to 17010.


From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Saturday, March 26, 2016 11:10 PM
To: Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com>
Cc: Gwenn Etourneau <getourneau(a)pivotal.io>; Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: Re: Reg Combining the jobs of router and hm9000 in cf-231

Both HM analyzer and gorouter try to start a debug server serving on port 17001, so if you try to colocate them without any reconfiguration, you'll hit this port collision. I don't know if the port is configurable for HM, but for gorouter it certainly is:

https://github.com/cloudfoundry/cf-release/blob/v231/jobs/gorouter/spec#L37-L39

If I recall correctly, HM starts several debug servers on consecutive ports 17001 - 17009 or so, so you may want to configure the gorouter to use 17000 or 17010.
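Amit's suggestion amounts to overriding the gorouter's debug-server address in the deployment manifest. A sketch of what that override might look like is below; the property name is taken from the gorouter spec lines linked above and should be verified against your cf-release version, and port 17010 is the value Amit suggests (Nithiyasri later confirmed 17010 worked).

```yaml
# Sketch only: move the gorouter debug server off port 17001 so it no
# longer collides with HM9000's debug servers on 17001-17009.
# Verify the property name against jobs/gorouter/spec in your release.
properties:
  router:
    debug_addr: 0.0.0.0:17010
```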

On Sat, Mar 26, 2016 at 10:34 AM, Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com> wrote:
Hi

We are using cf-231 and trying to reduce the number of CF VMs. We tried combining router and hm9000,
but after combining we get the following error. We tried redeploying freshly, but no luck. Please let us know if these jobs can be combined.

{"timestamp":1459009125.658613443,"process_id":2894,"source":"vcap.hm9000.analyzer","log_level":"error","message":"Failed to start debug server - Error:listen tcp 0.0.0.0:17001: bind: address already in use","data":null}
{"timestamp":1459009195.683453321,"process_id":3080,"source":"vcap.hm9000.analyzer","log_level":"error","message":"Failed to start debug server - Error:listen tcp 0.0.0.0:17001: bind: address already in use","data":null}
{"timestamp":1459009265.704365969,"process_id":3278,"source":"vcap.hm9000.analyzer","log_level":"error","message":"Failed to start debug server - Error:listen tcp 0.0.0.0:17001: bind: address already in use","data":null}
{"timestamp":1459009335.719108820,"process_id":3464,"source":"vcap.hm9000.analyzer","log_level":"error","message":"Failed to start debug server - Error:listen tcp 0.0.0.0:17001: bind: address already in use","data":null}

Job in Manifest:
-------------------
- instances: 2
  name: router_hm9000
  networks:
  - name: ccc-bosh-net
  properties:
    consul:
      agent:
        services:
          hm9000: {}
          gorouter: {}
    metron_agent:
      zone: zone
    route_registrar:
      routes:
      - name: hm9000
        port: 5155
        registration_interval: 20s
        tags:
          component: HM9K
        uris:
        - hm9000.<%= $root_domain %>
  resource_pool: medium
  templates:
  - name: consul_agent
    release: cf
  - name: gorouter
    release: cf
  - name: hm9000
    release: cf
  - name: metron_agent
    release: cf
  - name: route_registrar
    release: cf
  update: {}


Which component of cloud foundry will sunset the VARZ from CF release 233

Patrick Wang <goupeng212wpp@...>
 

We will migrate to CF release 233 soon, and /varz support is being sunset from HM9000 and the DEA. We are planning to migrate our monitoring tools from VARZ to the firehose. However, we don't know the whole picture or the plan for the VARZ sunset across all Cloud Foundry components. We need to know the VARZ sunset plan for Cloud Controller, Router, UAA, and the service nodes (MongoaaS-Node, MyaaS-Node, etc.). Does anyone know about that?
